Tuesday, October 11, 2005

The Friendly User

What is particularly dangerous for a system is users who accept its flaws because they lack a sense of how fast computers generally respond. I park in a garage that is operated remotely, from Rotterdam to Amsterdam. Once in a while the system thinks my car is in when it is actually outside; the consequence is that it does not let me enter the carport. Now when I call the service number, the friendly service person at the other end thinks it is normal to have a login sequence of 15 minutes, and that it takes half an hour to flip the flag that indicates the in-or-out position of my car. Yesterday I heard from a more knowledgeable person at the garage company that a software upgrade went sour last week, and that the response should be much faster.

Another example: when the OS/2 TCP/IP driver in its famous DOS box failed, it had to recover for about 20 minutes, and then it restarted. At the bank where I worked, the users duly reported performance problems, which were checked by staff and then closed. When it worked, it worked fine. In reality, the users should have reported a defect (which it obviously was, with outages of more than 15 minutes), so a different crew would have been dispatched.

Wednesday, August 03, 2005

Logo

I loved Logo to pieces in the eighties. I had the Logo cartridge for my Atari 600XL, and I liked it so much that I bought the 64K ‘outboard’ for this machine even though I could not afford it, and it made me walk around in old jeans and sweaters for a year; there were not enough nodes to do anything useful in the standard 16K. Now a common misconception about Logo is that it only does turtles. I spent numerous hours in 1984 watching random walks of four turtles, but the real use I got out of it was in its capacity as an easier LISP-without-parentheses. In fact, I first understood recursion during my first endeavours with Logo. In Rexx I still use the “parse var line first rest” recursive parse technique that I learned with Logo’s First and Butfirst (LISP’s CAR and CDR). I was excited to see that LCSI still exists and even has their stuff running on the Mac. The manual from 1983 is the only thing I saved (because it smelled rather nice, and epitomized the eighties with pictures of happy people and happy turtles, in an orange and brown design that just became fashionable again). I look forward to using it again for little playthings, and sometimes wish I had children to teach it to. Some things in the Mac version need to be fixed, but when they are, I’ll gladly pay for it.
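The First/Butfirst idiom maps directly onto head/tail recursion; here is a minimal sketch in Python (my own illustration of the idea, not anything from the Logo manual):

```python
def first(xs):
    """Logo's FIRST / LISP's CAR: the head of a list."""
    return xs[0]

def butfirst(xs):
    """Logo's BUTFIRST / LISP's CDR: everything after the head."""
    return xs[1:]

def count(xs):
    """Recursive length: the way a Logo beginner first meets recursion."""
    if not xs:                      # empty list: nothing left to count
        return 0
    return 1 + count(butfirst(xs))  # one for the head, recurse on the rest

print(count(["happy", "turtles", "walk", "randomly"]))  # → 4
```

The Rexx `parse var line first rest` idiom does the same thing on a string of words: peel off the head, recurse on the rest.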

Saturday, July 30, 2005

From the archives: Application on Fire

It must have been around 92 or 93 that our system programmers group was tasked with delivering an application, because the application people themselves were swamped with more important applications. It was a kind of early information kiosk app. We did this in a few days with two people, a good buddy and me, and everybody was happy. It used CICS, DB2 and Cobol, and a bit of S/370 assembler. It was fast. It contained the phonebook of our company and a lot of information pages that were refreshed by the HRM department staff. It ran fine, and we had only one support call for it, ever. I took that call.

Pierre called; he was a fine cartoonist and a non-technical person. He complained that the application did not react very fast. I duly logged on to Omegamon and saw that DB2 was processing calls, most in about 0.2 s. I looked at CICS, and it ran OK. I looked at MVS and then at VTAM. I could not find the problem. But because lunch hour was approaching, I offered to hop over to the other building; it housed the restaurant and also the HRM department.

I popped my head around the corner and spotted this particular performance problem: a very small trickle of smoke emanated from his terminal, forming a mini black tornado rising to the ceiling. Pierre looked a bit surprised when I yanked the plug from the socket in the floor, but he was happy again when we had a new terminal installed that afternoon.

Tuesday, July 26, 2005

RIP OS/2

Rest in Pieces, OS/2. Of course OS/2, though officially disavowed by IBM, is not really dead. The people behind eComStation maintain the code and will sell you an OS/2 if you want to buy one, though it is a bit pricey for my taste. I remember buying OS/2 1.1, or trying to. I was grilled by IBM people on the phone, who threatened that I would “lose all my programs” if I installed it, asked what I’d do with it, and suggested I could just borrow a copy from work until they located a supply in the Netherlands. My answer to their “why” question was that I would like to develop for it; this was met with a sad silence. But did I know it ran DOS programs very badly?
But seriously, I loved OS/2, and I still boot it occasionally under Virtual PC, if only to hear the “whoop” sound when a folder is opened - remember, sound on a pc was new then. A bug in OS/2 Warp sounded uncannily like an old Amsterdam tram. WPS was far ahead of its time, and of course OS/2 was the first worthwhile and stable multitasking pc operating system. It allowed Rexx to enter millions of living rooms and corporate offices, where previously it was mostly confined to VM and MVS. I have the Microsoft OS/2 1.2 API books, which were of course (and deliberately?) incompatible with the IBM development kit. MS also fought to exclude Rexx from OS/2 SE, because it knew that Rexx ate Basic alive. Later on I was involved in a leading edge OS/2 project that needed weekly builds of the OS to stay ahead of the bugs we uncovered; it only grew stronger. Twelve years later, Windows still needs 256M for what OS/2 did in 16M. OS/2 became useful in release 1.3, when IBM rewrote most of it in assembler. Also, in this version it could finally print. The 32 bit 2.0 version really was a better Windows than Windows, forcing Microsoft to release a bogus device driver architecture in W95 to break compatibility. Even that was trapped and emulated by IBM OS wizards for most applications that “needed” it (all three of them).
When the company I worked for all those years decided to go Windows (undoubtedly inspired by cowardly IBM sales persons), we had to roll out Office in OS/2 Warp, because the then current hardware base did not run NT.
What IBM did to OS/2 is a crime perpetrated by management against their engineers and to the general public that was deprived of any competition until others came around.
IBM sold its PCs halfheartedly, with the wrong video drivers installed for OS/2, until every young nephew of my wife knew that you had to scratch this “o-esse-dos” thing immediately. And we are not even talking about marketing it with a stop-sign logo and nuns in a convent, or even naming it ‘half an OS’. IBM did not react at all when MS employees were astroturfing the OS/2 Usenet fora during working hours - this was the time that IBM Legal could have shut MS down just by waving unspecified patents, or by publicizing the mores of the MS fora. Sometimes I have dark images of IBM doing this deliberately, to finally put a stop to all those antitrust cases the US Government foisted upon them.

Don't forget to sign the OS/2 Open Source petition! It will never happen though.

Friday, July 22, 2005

Caps Lock to the rescue

After a few months with the Matias Tactile Pro I grew restless again when I noticed that ALPS keyswitches are not the same thing as buckling springs. They do not become softer with use as buckling spring keys would; they keep a nasty 'tack' feel and are extremely noisy. Not that the Matias is a bad keyboard - I prefer it over a membrane keyboard anytime. So it was time to hook up my Unicomp again, from the rightful heirs to the IBM Model M. As this is a PS/2 keyboard with a DIN plug, I run it over a Sitecom USB-Dual PS/2 Adapter. The keyboard is a special one, a 3270-style emulator keyboard with 122 keys. Lamentably, the last system to support it well was OS/2, so it does excellent service on my OS/2-based P/390 mini-mainframe. It has no Apple key, which is to say it has no Windows key - one of the more daft developments of the last decade.

So I noticed new stuff in Tiger's keyboard menu - the ability to switch these key functions to other keys - and YES: *caps lock*, that anachronistic holdover from the heavy typewriter era, can be switched to *command*, which is the official moniker of the Apple key. Emacs, of course, does not even blink as I switch the function keys I use in .emacs to the available, working PF keys.

OK, Apple kernel guys: now make a driver for my 122-key 3270 Unicomp, so I can access all the function keys and do an Attn, SysRq, CrSel and ExSel (not to mention ErEOF) when I want to. Who is the Apple OSX keyboard driver champion?

But the Caps Lock switch is pure gold: the most irritating, struck-by-accident key given an innocent and, when intentionally pressed, useful and necessary function.

Wednesday, July 13, 2005

"Save As" Considered Harmful

It recently occurred to me that I was missing out on many opportunities to reuse work, because editors nowadays let you start on a blank page and do a "save as" later; until then, no filename is attached to that particular piece of work. In the days when I used ISPF/PDF as my sole editor, it repeatedly happened that I set out to do something I had apparently already done earlier. When editing a member of a partitioned dataset, you need to specify the member name first. And then, boom, there it was: the exact thing I was planning to enter, just because my naming algorithm seems to be stable and predictable. This saves you from having to remember every little bit of maintenance work you ever did. It is a pity it went away; now, when we do not remember the past, we are literally forced to repeat it.

Thursday, June 23, 2005

Incorrectable improvements

The NetBeans story (about how it went from usable to bondage & discipline) goes further. After wasting an inordinate amount of time, we spent some more time at work, because just throwing away the NetBeans-generated GUI programs is not going to help anybody. So we got it to import the project. And the first thing it does is pop up a message box suggesting we delete all the .class files, because they are in the source path. WHY? They are there on purpose; we put them there after careful deliberation. We value a single classpath root. Some tools (even in the SDK) expect them there. It also makes for quick visual inspection of the generated classfiles. But WTF should a tool care where I put what? We found the project properties. They were hidden behind a right click, at a spot where they did not use to be (and we never needed them before, being able to mount the jars we needed). So now NetBeans also writes metadata all over the place, in places where I need to find it to delete it again. Although we cannot drop it right away, we will be looking around. Or Coyote must be so brilliant that I’ll be able to use it for ooRexx and NetRexx.

Saturday, June 18, 2005

The sorry demise of NetBeans

Some people managed to totally destroy NetBeans usability for our project. They must have been taken over by the Borg. Let me explain. We have a fairly big application that is written more or less in Java. It is actually written in NetRexx, but that is not really the point here. For GUI development, we used NetBeans. Draw a screen, add widgets, double-click and add calls to our own methods. Great. This worked up until NetBeans 3.6. And it was easy to add the project to NetBeans: our codebase is make and cvs (now subversion) based. Just add the classpath root to a NetBeans virtual file system, let it scan, and it works.
In 4.0 and 4.1, not anymore. Not at all. This confirms my worst preconceptions about IDEs. These people decided to just take out the extremely useful feature of virtual filesystems and make the thing totally Ant based. The new 4.1 release “is even more flexible” and adds “free form projects”. Free form, my ass. It immediately complains: cannot add a project that already has a Build directory. I then try to add a “standard project” (where standard also means “Ant”). No dice, because it ‘is already owned by another project.’
I don’t want NetBeans to build my project. I just want to press F9 and compile. It does not let me anymore, and I waited patiently for 4.1 to correct the situation.
There are lots of docs going with all this, touting its flexibility, though they are totally dense and mill on and on about Ant. I have already spent a lot of time on this, and it did not help. I think my option now is to add all the packages and subpackages of the hierarchy by hand. They must have totally lost it. I already saw some complaints on the mailing list and was struck by the sheer arrogance of those people, knowing it better than the users who suddenly lost their ability to work with the tool. So by moving and renaming a lot, I got NetBeans to digest the project. I edit a file, and IT GREYS OUT the compile option, probably because of some error in an Ant file I do not want in the first place.
There also seems to be a “blueprint” now for enterprise Java project layout, without a doubt devised by ‘technical project manager’ people who never design or code, but bestow ‘naming conventions’ upon those of us who do. But taking working functionality out of a tool to make people conform to your ideology is a very, very sick thing to do. So I stay at 3.6 until I find something better. Bye bye NetBeans, it has been fun while it lasted.

Monday, June 13, 2005

Choice

Write a reasonably complicated web application, and the testers only complain about missing links and faulty graphics. This is great, because these problems are easy to solve, if they are problems at all. What is worse is that it eats the time that was needed to find and fix the real bug you know must be out there.

This week saw Apple switch to Intel and Jamie Zawinski switch to Apple. Both worry me a little. I agree with Robert Cringely that the leakage of MacOSX for Intel is probably a plot to bait more switchers. For me, the most worrying thought is that my main platform will be mainstream one day. The decision of what to switch my primary web server to became much more complex this week; most probably it will be a Mac Mini anyway, but I am seriously doubting MacOSX Server in favour of Yellow Dog Linux, having read some discouraging statistics on Mach-BSD thread forking in MacOSX Server. Not that it will ever be a high performance server anyway, but give me a break: 5 times more overhead on threading? It certainly calls for hyperthreading ;-).

Let's hope all turns out well. I am pro-choice, so it would have been better if Apple had just introduced a parallel line of hardware architecture, giving people the choice to, for example, keep buying PPC machines at the high end, with added reliability features like parity memory and a service processor. Or to run OSX on IBM’s rock solid RS/6000 and other POWER hardware, so I won't have to complain in every post about how much more trust we can put in a mainframe compared to the dinky machines we trust our data to nowadays.

Wednesday, June 08, 2005

BSF und die Umwertung aller Werte

I was quietly working on the Mac port of Open Object Rexx and the related BSF4Rexx when Apple dropped the bombshell of doing a "switch" themselves, this time to Intel CPU hardware. They must know something we don't, because switching your 64 bit OS to a 32 bit Pentium 4 (quad 3.6 GHz) does not really make any sense. Dvorak predicted the Itanium, but it turns out to be ordinary x86; we have to forgo the Cell, which is a pity, or switch ourselves to Linux, which will probably run on it soon. I am glad my own stuff is all Java; it only muddles the picture a bit for the C++ ports, which are a headache anyway.

XCode 2.1 came with a new gcc 4.0 that finds new compiler errors in previously compiling code; such is the price of progress. Another day, another ABI. I am still looking for a good porting guide from Linux to MacOSX, one that explains that thread_t is a structure in BSD and not a pointer, and why my files do not open when the code is compiled on Tiger, while they work when compiled on Panther. With some bad luck, the Intel situation multiplies these kinds of problems. The advantage is that I *do* know some x86 assembly, while PPC always was a black hole of a myriad addressing modes and load/store interleaves.

Boy, I am glad my programs are Java. And Rexx.

Wednesday, June 01, 2005

Keynote Poster

The project manager left, and I wanted to produce a poster but did not have a lot of time. It had to contain at least the screens of our webapp and some photographs of the team, and I also wanted to include most of the graphics we had produced for the presentations of the product in the past year. Now I knew that doing this in Photoshop, like you are supposed to, would have cost me a certain amount of time that, due to the deadline connected to the manager's departure, I did not have. So I tried it in Keynote, a presentation package. I figured that if I could make a slide of a high enough resolution and then export it to PDF, the printer could plot a sharp enough poster from it.

The big advantage here was that I could just drag and drop the material, line it up using the automated guides, crop the photographs, send relevant pieces to foreground and background, and be ready in no time, compared to all the layering work that Photoshop requires for this (combined with my relative inability to use that program well).

So I defined a 4000*4000 slide with a white background, dropped in the screenshots in the pure uncompressed tiff they were made of, dropped and cropped the photographs, and dragged and dropped the pdf vector graphics from the other presentations. The titles I did with very large Zapfino and Hoefler text (200 to 300 picas), and I put in some backgrounds using the standard geometrical figures.
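A quick back-of-the-envelope check (my own arithmetic, not from the original notes) shows what 4000 pixels buys you on a one-meter poster:

```python
# Effective print resolution when a fixed-pixel slide is plotted at a
# given physical width.
MM_PER_INCH = 25.4

def effective_dpi(pixels: int, print_width_mm: float) -> float:
    """Pixels per inch when `pixels` are spread over `print_width_mm`."""
    inches = print_width_mm / MM_PER_INCH
    return pixels / inches

dpi = effective_dpi(4000, 1000)   # 4000 px over 1000 mm (1 m)
print(round(dpi, 1))              # → 101.6
```

Around 100 dpi is modest for print, but the type and the dropped-in PDF graphics are vectors in the exported PDF and thus resolution independent, which is presumably why the fonts came out so sharp on the plotter; only the bitmap screenshots and photos are bound by the slide's pixel size.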

The machine became a bit unresponsive as I finished up the work; my impression is that once I started adding graphics with an alpha channel, there was more work for the machine to do. I exported the PDF to a standard X3 format and went to the printer, who luckily is situated just around the corner. After some initial anxiety, when the Sony Vaio machine she ran Photoshop on took several minutes to load the rather large pdf file, we printed a poster of 1 meter by 1 meter, and it came out lovely and sharp; the lady even remarked that she had not yet seen a font come out this sharp on the Epson plotter that was used. So hey presto, I know how to do it next time.

Saturday, May 28, 2005

Open Source WRT54G

Forgot to tell you that I chose the Linksys mainly because the source is GPL'ed, so you can download it and have a look at it. Also, people other than the supplier (Cisco) itself are able to fix any problem, and a lot faster than in a closed source setup. So to stimulate progress and foster new development, we should buy these products. There are firmware versions that support other functionality, like ssh, in the router itself. With the previous router, some years ago, I had to wait a while before it worked sufficiently well (it had some issues with passive mode FTP and its NAT tables), and it was not a good thing that I was totally dependent on the supplier to fix it. Luckily, we are moving to a world in which communities can solve these problems themselves, and also support the units far longer than is ordinarily viable for suppliers.

Friday, May 27, 2005

New network infra

I have experienced stalls in downloading since my ISP increased the bandwidth to 8 Mbit/s. This is fairly hard to diagnose, at least for me. Some testing exposed that when I power cycled my ATM modem and router, the connection picked up again, and combined with resumable downloads it was only a slight nuisance. Worse, my websites also lost connections when there was an outage of this kind. Because I only had this trouble during very fast downloads, I suspected the modem or the router. The 8-port ethernet switch is rather new compared to the other boxes, so I guessed that was not it, and neither was the old Airport. So today I started with phase one: I swapped out the Netgear RP114 for a Linksys wireless DSL router, which also gives me IEEE 802.11g instead of b. Although the helpdesk said it was probably the Alcatel modem that could not keep up, I observed that resetting only the modem did not restore the connection. So if, apart from the increased throughput I already have, I experience one more stall, the Alcatel also goes. Although I should probably first try to tweak it into the Pro version, which seems to have better throughput, if one may believe the rumours.

Tuesday, May 24, 2005

GUI - thanks but no

I make GUIs, but I do not like them. I think assembler macros are still the best way to put parameters into a program or operating system. This is because the process is reproducible: you can check for errors before submitting them to a system in a much more structured fashion. It is much easier this way to make controlled procedures for promoting changes through environments to production systems (like you should, and you know it).

Call me old fashioned, because I am. There must be an enormous waste of time and money going on because pimply 'network administrators' (a job title we used to reserve for people knowledgeable in VTAM and the rest of SNA, and in TCP/IP) type different values into every pc they find and do not even write them down. Instead, everything of value should be in text format, in version management. Disregard that at your own peril -- mess with GUI configuration dialogs and you could be out of a job somewhere in the future. JBoss uses XML configuration, and that is OK; we have the files in SVN. Although XML is not made for people to read, it is the best alternative to date.

Sunday, May 22, 2005

Box the device

Lots of people who use Apple hardware and its OS think the company can do nothing wrong. After spending the day developing in Panther and the evening restoring my backups onto a fresh install of Tiger with all clusters zeroed out (and about the same amount of time typing in serial numbers), I do not feel the same. This is because:

1) When a disk, driver cache or other system software component fails, the machine should halt with a clear message about what is wrong, and it should avoid the error by assigning spare clusters or boxing devices that deliver spurious interrupts on the bus. It should not crash and throw its nine fans into overdrive, scaring my wife and cats. It should have a service processor to take care of this; that would cost a minute fraction of the effort that is invested in the video card (which I did not choose myself, because I have some graphics company CTO's machine).

2) This is the second machine of this type that has given me grief. The first went back after a week. Due to (1) I still do not know the problem; I suspect it is heat. Because these machines are designed to just about keep working, and not to fail cleanly when critical parameters are crossed, it is hard for me to trust this one. It worked a few months without a glitch, but boy am I glad that I do make backups.

3) They should license their OS for other machines, perhaps IBM's. I would be tempted to pay a bit more for an IBM machine that does have all the diagnostics, knowing there is parity on the memory and checking of other critical values. With AIX as it is now, that is no option, and neither is Linux. Or they should start building dependable machines, like that G4 I had for almost two years.

Saturday, May 21, 2005

Afterburner

All things worse than Murphy struck again, just because there is a deadline. While working away, I suddenly realized how to solve the rexxutil.dylib loading problem. Because the XCode 2 tools, and certainly the gcc 4.0 compiler, are not yet up to scratch for compiling Open Object Rexx on the Mac, I had to restart from my Panther 10.3.9 disk with XCode 1.5 to compile the fix I had just entered. It ran, the Sysxxx util functions are now automatically loaded by the interpreter, and it looked like a 5 minute diversion from the productive weekend I had planned.

Then the Mother of All Murphies struck when I restarted from 10.4.1. It did not start, showing me an Apple and throwing the G5 into afterburner mode, which it apparently does when it loses all track of the temperature sensors and must assume that it is really *hot*. Our two cats came into the room to check whether there was danger.

It took me 10 minutes to figure out how to open the CD door on boot; I repaired the disk and the disk permissions, set the boot partition again, and started. The jets took off immediately. I proceeded by resetting the NVRAM and PRAM as directed on the website (you should always have two computers hooked up to the net).

Nothing. I did an archive and install, losing costly time in the future by now having to reinstall all the commercial crap software from Adobe and Macromedia that I am using. Avoiding this waste of time on activation and serial numbers must alone be a valid reason to switch to open source only.

After the reinstall: the same. I guess there is a faulty track on the disk just where it reads the boot code, or otherwise there is some kind of error in the fastboot, kextcache or other caches that the Tiger install did not solve. So I spent the rest of yesterday evening moving work over to my other disk (you should have at least two disks in every machine) and booting from Panther. And (sigh, see yesterday) compiling emacs again. This time I am certain: neither their make clean nor their make maintainer-clean cleans out all of the libraries; you have to do an uninstall, nuke the emacs directory, check out a clean cvs copy, and ./configure and make bootstrap. But we're up again, and Object Rexx on the Mac has one error less.

Thursday, May 19, 2005

Emacs on OSX

Apple does deliver Emacs on OSX, but unfortunately it is a terminal-window, character-mode-only version. What I use is GNU Emacs, the Carbon port from Andrew Choi, who 'defected' to XEmacs; I take no position here, because I do not care and only want to use what works. Apple does have an Aqua port of Emacs on its website that looks promising (great fonts!) but does not seem to be quite there yet.

What I would like to know is why Emacs, compiled from source on my own system, breaks on every OSX upgrade, now even from 10.4 to 10.4.1. I gather it has to do with C++ and GCC, which changes its ABI every sub-release. The question that remains is why big apps like Motion and Office do not break.

For comparison, Java code keeps running all the time, provided it is post-1995-beta level. I've got mainframe (here we go again) load modules (what we would call binary executables) from the seventies that still run. That platform also switched from 24 to 31 bit addressing, and more recently to 64 bits. I guess those people were smarter.

This sums up the developments in the trade (I am a little jealous of someone else's tagline):

The old days: Smart People in front of dumb terminals
Now: Dumb people in front of smart terminals

Wednesday, May 18, 2005

JSF II

Another gripe about JSF: its event handlers also fail silently if you mistype or mis-think a method name. These event handlers were modelled, as we read in the documentation, on the Swing event model. In Java bean events, which is the event model used by Swing, you register an event handler explicitly with an addXEventListener call. This leaves the method pointer in a structure that is looped over when firing these events. In JSF, though, there is no registration. You indicate that you want a ValueChangeListener, and its implementation is just a method with the right signature: the name and one Event parameter. Get one of these wrong, for example by mixing up the name when it is late, and you will never know, because the wrong method gets called (or none at all) and there are no diagnostics whatsoever, neither at compile time nor at runtime.
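The difference between the two dispatch styles can be sketched in a few lines of Python (a toy model of my own, not JSF's or Swing's actual code): explicit registration fails loudly the moment you mistype a handler, while lookup-by-name shrugs a typo off in silence.

```python
class ExplicitEmitter:
    """Swing-style: handlers are registered explicitly, so a typo in the
    method name blows up at registration time with an AttributeError."""
    def __init__(self):
        self.listeners = []

    def add_listener(self, fn):
        self.listeners.append(fn)   # kept in a structure, looped over on fire

    def fire(self, event):
        for fn in self.listeners:
            fn(event)

class NameBasedEmitter:
    """JSF-style toy model: the handler is looked up by name only at fire
    time; a mistyped name silently dispatches to nothing."""
    def __init__(self, target, method_name):
        self.target = target
        self.method_name = method_name

    def fire(self, event):
        handler = getattr(self.target, self.method_name, None)
        if handler is not None:     # typo => None => silently skipped
            handler(event)

class Page:
    def __init__(self):
        self.seen = []

    def value_changed(self, event):
        self.seen.append(event)

page = Page()
NameBasedEmitter(page, "valueChanged").fire("e1")   # typo: nothing happens
NameBasedEmitter(page, "value_changed").fire("e2")  # correct name works
print(page.seen)  # → ['e2']
```

With the explicit style, `emitter.add_listener(page.valueChanged)` would raise immediately; the name-based style gives you exactly the late-night debugging session described above.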

Someone commented that this is the point where I usually refer to some older mainframe software technology that just worked. I will refrain from that now, and just mention that CICS, ISPF and IMS DC all just work. It is the failure of the customer base that these technologies are no longer employed in new systems; this has to do with the powerless position IT managers are in nowadays, and the general devaluation of standards due to the mickeysoft era.

On the whole, JSF is not bad technology, it just needs to be fixed to lower the frustration level.

Tuesday, May 17, 2005

JSF

JSF qualifies for the title of 'most frustrating software development infrastructure component' due to a few design and/or implementation features.

1) The symptom dumps are almost useless as they seldom point to your own code
2) It disregards null pointers, so things just stop working without you having a clue where and why; you must stay extremely alert just to keep things working
3) It is absolutely unclear why and when the cache of your web browser must be flushed; sometimes the errors you see have already been fixed: you go to sleep desperate, and when you fire up the server the next day, everything works; on the other hand, working code fails at customer sites until you flush the caches

This does not mean I do not like it; there is just a great opportunity to improve the design and code so it will stop wasting other people's time.

I recently had another look at IBM's mainframe ISPF, in the GML version. This is more or less the same idea, and of course IBM did not realise the gold they are sitting on: change the tags to XML, make a version of the compiler that spits out servlets, and we are done.