Monday, January 15, 2007
Procrastination Study
http://biz.yahoo.com/ap/070111/procrastination_nation.html?.v=1T.
Then I went to the source and did a little more reading on the author's website: http://www.procrastinus.com/ The best part was this page that talked about the various theories. It uses frames so the link looks weird, but it is the same site.
http://webapps2.ucalgary.ca/~steel//Procrastinus/theories.php
It really changed how I worked on my weekend. There were a couple of tasks that I had been procrastinating on and this really convicted me. Now I have both tasks done and I feel much better. It seems the Delayed Expectancy Theory is the one that really made a difference to me. Those tasks were just drudgery with little reward and I just needed to get them done without putting other quick reward projects first. Now that they are done, the actual reward of relief is surprisingly high. With more analysis, I'm finding a lot of distractions fall under this quick reward category he talks about. For starters, I turned off the ringer on my incoming e-mail.
I hope you enjoy the reading. It really made a difference for me.
Saturday, January 6, 2007
I'm Never Wrong...
I'm never wrong when I'm sleeping.
Is using a title about operating systems going to get anyone to look at this? If I'm going to talk about the future of operating systems there is one thing I've learned. Marketing comes first. And besides, I'm wrong the rest of the time so you may as well stop reading.
The last blog entry ended with a comment, added a few minutes after I wrote it, that we need to look at operating systems more as a video feed than a finite state machine (FSM). The root of the problem goes back to my hero Alan Turing. Did I know I was going to mention Turing when I started this? No way. I'm just typing, and the last thing I would ever do is question his absolute brilliance, but I may be wrong in saying I would never question it since I'm not sleeping. On a side note, has anyone read Cryptonomicon? Too long, but a fun book.
Finite State Machines work great to compute a result, but we no longer use computers to simply compute results. They run indefinitely and have become more of a linear medium than a finite one. The result is that our management tools were all designed to look at an operating system as a snapshot, a single picture, while in reality it is a highly complex video camera. Why are we still stuck using one-dimensional tools on a linear, two-dimensional object?
The answer is that our tools were always limited by the resources we could bring to bear on the problem. That, and the fact that the godfather of our industry (Turing) was only interested in breaking codes during WWII. He only cared about a single end result, and once the ticker tape stopped, the tape WAS the result. Our resources were limited in that we (and he) always had tiny pieces of tape (memory and disk) to process and store everything (overwrite, overwrite, overwrite). We're stuck in a design from 1950, and our complacency is the root of the problem. So the big question I have is: what are you doing with that 500GB hard disk in your everyday computer? Unless you are a videographer, you probably wasted your money and have only used a few gigs.
I'm not saying we need to throw out the FSM at the root of our industry, but maybe it's time to rethink the one-dimensional tape. Did I lose you? I feel like I lost myself, but let me try to explain. I'm talking about a Temporal Finite State Machine. There seems to be an opportunity to build a new type of operating system that relies heavily on readily available massive amounts of storage to journal everything (copy that tape). That would allow much tighter controls on security and increased reliability, with the ability to diagnose problems (non-linear operation) or changes that have occurred in the past, instead of only looking at the operating system as a single image of the here and now. Everyone is groaning right now with only thoughts on performance, and I agree, performance would be an issue, but this is fantasy land, I'm just brainstorming, and you are just along for the ride. I also work at FileNet, and we have all kinds of crazy ideas for storing and organizing massive amounts of information, so it's not a complete fantasy. I at least hope I stretched your brain a little. I'll write more on this.
Temporal Finite State Machine.... Hmmmm... This is going to kill some brain cells I can tell.
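To make the idea a little more concrete, here is a minimal sketch (in Python, with every name hypothetical, nothing from a real system) of a finite state machine that copies the tape instead of overwriting it: every transition is appended to a journal, so the machine's state at any earlier moment can be reconstructed by replay instead of being lost.

```python
from itertools import count

class JournaledFSM:
    """A finite state machine that never overwrites its tape: every
    transition is appended to a journal, so the state at any earlier
    tick can be reconstructed by replay.  (A real system would journal
    wall-clock timestamps; a counter keeps the sketch deterministic.)"""

    def __init__(self, initial, transitions):
        self.transitions = transitions      # {(state, event): next_state}
        self.clock = count()
        self.journal = [(next(self.clock), None, initial)]  # (tick, event, state)

    @property
    def state(self):
        # The "snapshot" view: only the latest journal entry.
        return self.journal[-1][2]

    def handle(self, event):
        nxt = self.transitions[(self.state, event)]
        self.journal.append((next(self.clock), event, nxt))

    def state_at(self, tick):
        """The "video" view: replay the journal up to `tick`."""
        current = self.journal[0][2]
        for t, _event, state in self.journal:
            if t > tick:
                break
            current = state
        return current

# A toy process lifecycle: the journal is the video, `state` is the snapshot.
fsm = JournaledFSM("idle", {
    ("idle", "run"): "running",
    ("running", "block"): "waiting",
    ("waiting", "wake"): "running",
})
fsm.handle("run")       # tick 1
fsm.handle("block")     # tick 2
print(fsm.state)        # waiting
print(fsm.state_at(1))  # running -- what it was doing back then
```

The storage cost is exactly the trade the entry describes: the journal only ever grows, spending disk to buy the ability to look backward.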
When am I ever going to get to the point and finally talk about why source code causes most of our problems? For now I think I'll just continue down this rabbit hole.
Friday, January 5, 2007
Computers are too complicated
To repeat my earlier stance, computers (operating systems) have become too complex for the tools we use to monitor them. Would we run a nuclear power plant with gauges strewn all over the entire operation? No. We bring all the gauges and controls into a central control room. This is the problem with today's operating systems. We've strewn the gauges all over the place and made them next to impossible to read. I've been working with computers for a long time (I graduated in computer science in '88) and I still can't tell you the health of a computer when I walk up to it. I'm not even sure I could tell you the health after a few hours of looking at it. There are tools out there that help with this (Registry Mechanic on Windows comes to mind), but even those don't give much information; they just fix and forget.
Where is the information we need better access to in order to monitor the health of a computer? Three places: 1) Archival storage. 2) RAM. 3) Process status.
What are the tools we use to monitor these today? 1) File managers (nothing much more complicated than ls). 2) vmstat is the only thing I can think of, and that is as crude as it gets. 3) ps (on Windows, ctrl/alt/del, process tab), and that doesn't give any historical information about the process.
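Just to illustrate the central-control-room idea, here is a toy sketch that pulls one reading from each of those three places into a single gauge. It only watches the current process, uses Unix-only standard-library modules (shutil, resource), and every field name is made up for the sketch; it's a brainstorm on paper, not a real monitoring tool.

```python
import os
import shutil
import resource
import time

def health_snapshot(path="/"):
    """One combined reading from the three information sources:
    1) archival storage, 2) RAM (of this process), 3) process status."""
    disk = shutil.disk_usage(path)  # archival storage: total/used/free bytes
    # Peak resident set size of this process (kilobytes on Linux,
    # bytes on macOS -- one more gauge that's hard to read).
    ram_peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {
        "when": time.time(),
        "disk_used_pct": 100 * disk.used / disk.total,
        "disk_free_gb": disk.free / 1e9,
        "ram_peak": ram_peak,
        "pid": os.getpid(),  # process status (this process only)
    }

snap = health_snapshot()
print(f"disk {snap['disk_used_pct']:.0f}% used, "
      f"peak RSS {snap['ram_peak']}, pid {snap['pid']}")
```

Even this toy makes the entry's point: each number comes from a different corner of the system, and nothing in the stock toolset keeps a history of any of them.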
What would it take in those three areas to find a virus just by looking at a proper monitoring tool? (This is sort of fun, and I feel like I'm on to something as I start to visualize a new tool that monitors a running operating system.) I worked at Sun in the early 90s and Rich Pettit's setool comes to mind. Even that was way too complicated, but it did bring together a lot of disparate data. That's the sort of thing I'm trying to visualize.
You look at a list of processes that are currently running. You can see a graph of each process's CPU time and memory usage since it started. You click on the file that is running from that same view and see the change history and the mechanism (human or computer) that changed the executable. Maybe some sort of fingerprinting to see who or what actually made the change and when (journaling disk drives in VMS were really cool, but never became mainstream). You could see the process's interaction with the network over time and actually drill down into the network traffic to view it as a video stream. You could click on the current memory and see a map of it, then drill down to see what's using what and how much. Maybe a sysadmin could see the actual contents of memory, with tools to view different types of memory in different ways (this is a big stretch, but part of brainstorming). From the archival side of the house, you could look at an executable on disk and see when and where it was run over the past days, what process was starting it, and even drill down into those processes as well. Just a bunch of random ideas, and I'm sure there are more where those came from.
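A sketch of the data structure that kind of drill-down might sit on top of: instead of a single snapshot, journal per-process samples over time so a process's history can be queried after the fact. All the numbers, pids, and method names here are invented for illustration.

```python
from collections import defaultdict

class ProcessJournal:
    """Journals per-process samples over time so a process's history
    can be drilled into later -- the video, not the snapshot."""

    def __init__(self):
        self.samples = defaultdict(list)  # pid -> [(tick, cpu_pct, mem_mb)]

    def record(self, tick, pid, cpu_pct, mem_mb):
        self.samples[pid].append((tick, cpu_pct, mem_mb))

    def history(self, pid, since=0):
        """Everything a process has done from tick `since` onward."""
        return [s for s in self.samples[pid] if s[0] >= since]

    def peak_memory(self, pid):
        return max(mem for _tick, _cpu, mem in self.samples[pid])

j = ProcessJournal()
for tick, cpu, mem in [(1, 5, 40), (2, 80, 300), (3, 3, 42)]:
    j.record(tick, pid=1234, cpu_pct=cpu, mem_mb=mem)

print(j.peak_memory(1234))       # 300 -- the spike a one-time snapshot misses
print(j.history(1234, since=2))  # [(2, 80, 300), (3, 3, 42)]
```

The tick-2 spike is the whole argument in miniature: ps run at tick 1 or tick 3 would report a healthy process, while the journal shows the burst.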
Given a better view of the internals and history of an operating system, would a layman be able to detect a virus immediately? Given enough journaled information, could you reverse the effects of a rogue process? Given a lot of centralized gauges, can an engineer decide when a nuclear power plant is going to melt down and what to do? I say yes to all of these, but I'm not asking normal people to be nuclear engineers. With the current state of the art, even the engineers are blind when it comes to an operating system. Even a nuclear power plant's control room has alarms, and I doubt most operators know what every gauge and switch does; they probably have a big manual and a big panic button. But I'm not talking about the dangers of a nuclear plant. I'm talking about an operating system and the problems we face today: not enough information about its current and historical state to be able to diagnose a problem or to know if a change was critical or malicious.
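On the second question, reversing a rogue process: if every change is journaled along with its before-image, undoing one process's effects is just a replay of the journal in reverse. This is a toy in-memory model, not a real filesystem, and the paths and pids are made up.

```python
class ChangeJournal:
    """Journals every change with its before-image, so the effects of a
    single (possibly rogue) process can be rolled back after the fact."""

    def __init__(self):
        self.files = {}  # path -> current contents
        self.log = []    # (pid, path, contents_before_the_change)

    def write(self, pid, path, contents):
        self.log.append((pid, path, self.files.get(path)))
        self.files[path] = contents

    def rollback(self, rogue_pid):
        """Undo, newest first, every change the rogue process made."""
        for pid, path, before in reversed(self.log):
            if pid != rogue_pid:
                continue
            if before is None:
                self.files.pop(path, None)  # file didn't exist before
            else:
                self.files[path] = before

cj = ChangeJournal()
cj.write(pid=1, path="/etc/hosts", contents="good")
cj.write(pid=666, path="/etc/hosts", contents="evil")
cj.write(pid=666, path="/tmp/payload", contents="virus")
cj.rollback(666)
print(cj.files)  # {'/etc/hosts': 'good'} -- the rogue changes are undone
```

The catch, as the entry admits, is cost: the before-images only ever accumulate, which is exactly what that mostly empty 500GB disk would be for.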
I never even got to the actual root cause. The real problem lies in how we program computers and the complexity of programming languages. Maybe I'll get to it in another entry.
Thursday, January 4, 2007
Anti-virus software and software reliability
Anyway, I keep getting the message pop up that says my machine may not be secure because there is no anti-virus software. I usually think nothing of it, but last night it sort of got to me. I've been paying $10 a year to Symantec forever because the software I use is insecure out of the box. Insecure out of the box!
I drove the car off the lot, but I need to go to Pep Boys to get a key system for it.
I bought a brand new house, but I have to go to Home Depot to get door locks before my family moves in.
I bought an airline ticket, but I have to buy a seat belt in the terminal before getting on the plane.
I'm going on a cruise, but I have to bring my own life jacket.
I ate at Taco Bell, but I may get salmonella. Oh, I guess that one actually is similar, in that one's a virus and the other's a bacterium...
I know this isn't just a Windows problem. No OS is completely secure out of the box; they're all hard to use and highly insecure. It's a systemic problem with our industry: an interest in marketing features instead of marketing durability. If anyone finally figures out how to teach the public that features aren't as important as reliability, then we'll finally be on the right track.
Maybe there is a lesson here from the car industry? Maybe the Japanese will come along and create Hondalinux or Toyotalux and finally show us that reliability and usability are 1000 times more important than a glossy paint job and a slick sales rep.
There has to be a better way to program a computer than cryptic source code and that is another blog entry.