Thursday, July 27, 2017

Change is the Only Constant in Life 

All types of systems have to deal with change, but in computing the changes are more rapid, more abrupt, and occur at more levels than most people imagine.  We tend to think of software as an asset like a car: it has a finite lifetime and, unfortunately, it occasionally has to spend time offline in the shop.

Many users are surprised at how much retooling has been necessary for, say, Windows 10 deployments, or why a Web site that worked for years may stop working one day.

When we deploy a system, such as a typical web-based database application or a file server, the very standards to which we adhere (such as SSL/TLS or CIFS/SMB) change rapidly, because vulnerabilities are discovered in older protocols and hackers start attacking live systems quickly.  One day it simply becomes unsafe to use the older versions.
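To make that concrete, here is a minimal sketch (using Python's standard ssl module, not our actual configuration) of how a service can refuse the older protocol versions outright; the certificate file names and port are placeholders:

    import socket
    import ssl

    # Server-side TLS context that refuses the older, vulnerable protocols.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # SSLv3, TLS 1.0/1.1 rejected
    context.load_cert_chain(certfile="server.crt",    # placeholder certificate
                            keyfile="server.key")     # placeholder private key

    with socket.create_server(("", 8443)) as listener:               # placeholder port
        with context.wrap_socket(listener, server_side=True) as tls:
            conn, addr = tls.accept()  # handshake simply fails for old clients

The day an administrator raises that floor, clients stuck on older protocol stacks stop connecting; that is exactly the abruptness users experience.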

Also, most applications are built on ‘libraries’ (clib, PHP, JavaScript, SharePoint, etc.) that provide functionality such as a nice user interface or security features.  As vulnerabilities are discovered, or other issues force upgrades, we must create and install new versions while trying to offer compatibility and continuity to users.
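One small defensive habit this encourages (a sketch only; the package name and version floor below are hypothetical) is checking at startup that a bundled library is at least as new as the release that patched a known hole:

    import importlib.metadata

    def require_at_least(package, floor):
        # Compare the installed version against a known-patched floor.
        installed = tuple(
            int(part)
            for part in importlib.metadata.version(package).split(".")[: len(floor)]
        )
        if installed < floor:
            raise RuntimeError(
                f"{package} {installed} predates the patched release {floor}"
            )

    require_at_least("requests", (2, 20))  # hypothetical minimum patched version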

The browsers (and we test all browsers) and the operating systems get upgraded every few days (Microsoft) or every few weeks (Google, Apple, Firefox) to deal with security issues or add enhancements, and sometimes they remove functionality found vulnerable to hacking.  Yet Microsoft and others often choose not to add new functionality (such as a decent TLS security layer) to older versions (such as the still-popular Windows 7).

The servers we use to communicate (file servers, DNS, email, etc.) and the policies regarding their use change too.  Even the campus WatIAm system for userid lifetime management is changing this year.

For many systems there are so-called ‘sliding windows’ of support: Windows 7 through Windows 10, say, or Firefox from a certain vintage forward to a certain release.  We have to pick a minimum supported level and say that nothing prior to that version can work.  But the decision is based not on the calendar; rather, it rests on the oldest version known not to contain the worst bugs or intolerable vulnerabilities.
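As an illustration of how such a window gets enforced (the products and floors below are invented, not our actual policy), the test is a comparison against a per-product version floor, not against a date:

    # Hypothetical support floors: the oldest versions known to be free
    # of the worst bugs or intolerable vulnerabilities.
    MINIMUM_SUPPORTED = {
        "Firefox": (45,),
        "Windows": (7,),   # e.g. Windows 7 through Windows 10
    }

    def is_supported(product, version):
        # A client is supported only at or above the floor for its product.
        floor = MINIMUM_SUPPORTED.get(product)
        return floor is not None and tuple(version) >= floor

    print(is_supported("Firefox", (38,)))   # False: predates the floor
    print(is_supported("Firefox", (52,)))   # True: inside the window
    print(is_supported("Netscape", (4,)))   # False: no floor at all

When a new worst bug is found, the floor moves and the window slides forward, regardless of how recently a user installed the old version.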

I’ve long maintained that our job is to provide the “illusion of continuity despite the constant change of the environment”, and to do so in a computing industry that is addicted to flux and attached to less-than-totally-reliable electrical and network systems.

Engineering Computing has provided continuous file systems since about 1985: the N: drive and its files have remained in place despite many changes in the underlying technologies (Watstar, Network Appliance, and now Samba/FreeBSD, with underlying file systems moving from DOS to WAFL to UFS to ZFS).  If you logged in every day, everything kept working despite the changes happening around you.

For practical purposes, a ten-year life for a service is becoming impressive in this industry, though some of the systems we have built ourselves have lasted longer, with periodic upgrades, because they simply work well.  A well-designed local system can last longest because it can benefit from knowledge of its users’ processes.

Purchased local systems and subscribed cloud systems are in more flux than ever before.  Companies are constantly changing licensing arrangements or selling divisions to other companies.  They have a vested interest in getting people to upgrade yearly, to give the illusion of service or to extract higher margins from the same purchasers.  What should be minor improvements are made conspicuous, to give the impression that more has changed than really has.  (Anyone remember the ribbon bar or the paperclip in Word?)

Whenever a new system goes in, say a financial system or a learning system, we must invest much more than the contract amount: we must retrain people and retrofit many business practices and other systems to reflect the software’s design, and we lose some compatibility with the past.