Tuesday, September 9, 2008

Nigel -This one's for you

Thought you might like this article. http://news.bbc.co.uk/2/hi/science/nature/7604293.stm Perhaps they'll also discover how Brits can eat Steak and Kidney pie ;-}

Friday, August 22, 2008

Published prediction - LINUX

What Linux Will Look Like In 2012

Our open source expert foresees the future of Linux: By 2012 the OS will have matured into three basic usage models. Web-based apps rule, virtualization is a breeze, and command-line hacking for basic system configuration is a thing of the past.

By Serdar Yegulalp, InformationWeek, Aug. 14, 2008. URL: http://www.informationweek.com/story/showArticle.jhtml?articleID=210002129

What will desktop Linux be like four years from now?

In the time it takes most college students to earn an undergraduate degree -- or party through their college savings -- Linux will continue to mature and evolve into an increasingly capable desktop operating system.

The gOS "Space" desktop distills the Linux desktop to its bare, Web-driven essentials.

The single biggest change you'll see is the way Linux evolves to meet the growing market of users who are not themselves Linux-savvy, but are looking for a low-cost alternative to Microsoft (or even the Mac). That alone will stimulate enormous changes across the board, but there are many other things coming down the pike in the next four years, all well worth looking forward to.

Over the course of the last four years, Linux has taken enormous strides in usability and breadth of adoption. Here's a speculative look forward at what Linux could be like a few years from now -- or, maybe we could say what Linux ought to be like.

For-free Versus For-pay

Expect to see a three-way split among different versions of Linux. Not different distributions per se, but three basic usage models:
1. For-pay: Ubuntu's in-store $20 boxes are a good example. For a nominal cost, you get professional support for Linux as well as licenses to use patent-restricted technologies (e.g., codecs for legal DVD playback).
Expect this to at least gain nominal momentum, especially if the cost is no more than an impulse buy and people understand that Ubuntu can non-destructively share a machine with Windows. Also expect at least one other Linux company to pick up on this model (openSUSE, for instance), and to have preloads on new systems incorporate such things if they don't already.

2. Free to use: This is the most common model right now -- a free distribution with optional support, plus optional closed-source components: proprietary, binary-only device drivers.

3. Free/libre: These distributions contain no components with patent encumbrances or other issues, in any form. Distributions like gNewSense or Blag Linux already do this, and an upcoming version of Ubuntu (8.10 / "Intrepid Ibex," due in October) will also feature a wholly free installation option.

What's also important is that over the next few years, the distinctions between these three licensing models will become heavily accentuated by both the Linux community and the creators of these distributions themselves. This should help solidify for many non-technical people the distinction between free-as-in-speech and free-as-in-beer.

KDE 4's new desktop metaphor promises to give Linux users a radically new desktop experience.

The Desktop

This year we've seen the appearance of a number of possible models for the Linux desktop of four years from now. One is KDE 4, which despite a rocky first release is quickly drawing attention for its forward-looking approach to desktop management. Its new desktop metaphor, named "Plasma", has just started to strut its stuff. After four more years and a bit more third-party development, it stands to be a lot more than just a visual curiosity, and become an actual way to get work done.

If KDE 4's new approach is too daunting, the Mac OS X-inspired gOS desktop -- especially in its "Space" incarnation -- distills the Linux desktop down to its bare essentials. The gOS interface also serves as a front end for many common web applications, which is one of the biggest ways people will do work on Linux in the first place. Expect to see many more variations on these kinds of stripped-down, click-and-go interfaces as ways to let a growing base of non-technical users get on board with Linux. Pros will still be able to drop to a command line, though.


Right now, in 2008, Linux is present in a great many hardware devices without most people ever knowing about it. By 2012, it'll be a brand name unto itself, thanks to the exploding netbook market, where Linux has proven itself to be a solid way to build an inexpensive computing platform. By that time, many first-tier manufacturers like Dell ought to be offering such devices -- and those that already do (like HP) will probably be looking seriously at offering more Linux-based gear. (As of this writing, Lenovo's just announced the IdeaPad S10 netbook with Linux in certain territories.)

Phones are already among the devices now using Linux, and mobile is also a growth market. ABI Research projected that by 2012, Linux will be powering something like 40 million mobile devices shipped that year alone. The definition of "mobile devices" is also expanding: in addition to netbooks, look for a great many Linux-powered devices with open architectures (the OpenMoko FreeRunner, for instance) that are designed to move between niches and fill more than one need at once.

No discussion of Linux hardware would be complete without some discussion of hardware compatibility. Obviously there's going to be increased attention toward open-source device drivers for existing hardware, but another trend is the growth of hardware with open accessibility and standards. If any major hardware maker doesn't have Linux drivers for its products by 2012, either as a first-party product or as a community effort, it can expect to be singled out for it almost immediately.

Asus's "netbook" Eee PC and similar machines are just the first of what promises to be a healthy platform for Linux.


What'll you be running on Linux in four years? Chances are you'll be running a lot of what you have now, just with a new revision to the left of the decimal point. OpenOffice will be either in or fast approaching its fourth revision, with features like interoperability with Microsoft VBA macros, a native 64-bit edition and quite possibly an entirely new interface that isn't hidebound by the program's legacy requirements.

Another important thing to expect is the use of the browser as an application deployment framework, or at least attempts at same. This is already happening to a great extent on multiple platforms -- e.g., Gmail instead of Outlook or even Thunderbird -- but projects like Google Gears are aimed at making the desktop, the browser and the network work in both connected and disconnected ways.

Storage

As of this writing, a 1-terabyte consumer-grade drive has hit the market for about $175. In four years, a terabyte will easily be half that much, and a home media server with an array a few terabytes in size wouldn't be out of the question. One possible way to organize all of that space is through Sun's recently open-sourced ZFS file system, which allows easy growth and management of file systems.

Right now, however, the licensing for ZFS only allows it to be used in Linux's user space -- not an impossibility, but perhaps over the next few years Sun can allow ZFS to be relicensed in a more GNU-friendly fashion so it can work as a kernel add-on. (It's also possible to run ZFS in an OpenSolaris implementation such as Nexenta, along with all your other favorite Linux-y apps.)

System Configuration

Is it optimistic to expect that by 2012, command-line hacking for basic Linux system configuration will be a thing of the past? One can hope, especially for things like display configuration, which should be auto-detected and configured touchlessly. This is crucial if Linux is to make headway with regular users, although putting Linux on devices like netbooks, where the hardware is a predictable and controllable factor, should help.

If there's any system configuration issue that divides Linux devotees, it's package management -- how to handle the wealth of software installed in a given Linux distribution. It's probably unrealistic to expect the plethora of distributions out there to consolidate on a single, one-size-fits-all package-management system, especially since each distribution tends to be married to its particular package management system. That said, the use of something like PackageKit as a packaging-neutral front end for a distribution might make transitions easier.

Also, the Conary package manager project offers some possibilities that deserve broader adoption, such as the ability to download and apply only changes to a particular package. That saves on bandwidth, which in turn ought to be a bonus for Linux users in developing countries where bandwidth is at an extreme premium.
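Conary-style delta updates can be illustrated with a rough sketch -- this is not Conary's actual format or API, just Python's standard difflib standing in for the idea: instead of re-downloading a whole package manifest, a client holding the old copy fetches only the lines that changed.

```python
import difflib

# Hypothetical package manifest: 100 entries, one of which changed
# between the old and new versions of the repository.
old = [f"pkg-{i:03d} 1.0\n" for i in range(100)]
new = list(old)
new[42] = "pkg-042 1.1\n"

# The "delta" is just a unified diff; a client that already has `old`
# can reconstruct `new` from it instead of downloading everything again.
delta = list(difflib.unified_diff(old, new, fromfile="old", tofile="new"))

full_size = sum(len(line) for line in new)
delta_size = sum(len(line) for line in delta)
print(f"full download: {full_size} bytes, delta only: {delta_size} bytes")
```

Even in this toy case the delta is a fraction of the full file, which is where the bandwidth savings for users on expensive connections would come from.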
Virtualization

Virtualization in the Linux kernel -- either in the form of KVM or Xen -- will make it that much easier to run Linux side by side with any other operating system, either as a way to migrate non-destructively from an existing Windows installation or as a way to expand Linux's own native functionality (for instance, by running multiple kernels each tailored for different needs).

Another possibility is to let Windows apps run side by side with Linux apps, using something like ReactOS as a Windows container under this scheme.

Linux On Servers

It's almost foolish to expect Linux's dominion on the server side to wane -- servers are where Linux has fared best, and all the signs point to that becoming even more the case. The real question is: in what form?
A major part of the answer lies, again, in virtualization: Linux's mutability allows for its use not only as a server platform but as hypervisor and container for other operating systems. That said, there's more than one way to do such things -- KVM and Xen are two major contenders, but both function in markedly different ways and are probably best suited to different types of work. Xen's best for running as close to bare metal as possible, but KVM lets a particular Linux instance function as a container for other OSes. To that end, over the next four years, the question won't be "Which one's the winner?" but "Who's using each for what?"


The difference between the Linux of four years ago and the Linux of today is striking enough -- not just in its diversity, but in the way it has consolidated its strengths as a server platform and as an OS for portable devices and emerging hardware markets. Expect it to make the most of whatever else we see in the next four years, too.

Web 2.0 Tool

It would appear that money is the defining point between the 1st generation web and Web 2.0. John Batelle (http://www.imediaconnection.com/content/7486.asp) explains, "Version one of the internet was short on execution, very short on profits -- the market and the technology were not ready for all our ideas. Version two is long on execution, it's long on profits, and I think, actually, there's a lot more opportunity now to start new companies."

If profit is the motive, then the tools that are being developed and deployed should be viewed as opportunities to make money. Further, since many of the tools are open source, there is an unusual market condition: tools don't make money by being purchased but rather by being used. So the question becomes: how do wikis, mypages, blogs, etc., make money for somebody?

The subtlety is that just about every one of the "new" tools has a link to money-making site(s), or an advice or links area (also pointing to money-making sites). The Google model follows Batelle's thoughts by making more money -- big money -- by providing a service that also extracts cash from users and advertisers.

I understand that there are altruistic developers out there developing free or open source tools for Web 2.0, but the main force behind the internet is $.

The mainstream tools such as wikis, searches (like Google) and blogs are by far the most commonly used. I have also used Perl-scripted web site development software such as metadot. One of the real advantages of these packages is that they allow the addition of widgets, gadgets, or services to customize the interface and functionality to meet the user's requirements.

Thursday, August 21, 2008

Security Alert

University of Alabama at Birmingham August 19, 2008
Spammers Go Down To Georgia; New Attack Exploits War in Former Soviet State

The University of Alabama at Birmingham (UAB) Spam Data Mine is showing the war in Georgia is being used to evade spam filters. The university detected a mass spam attack, collecting more than 500 emails in a 90-minute period, carrying a link to a fake BBC story that Georgian president Mikheil Saakashvili is homosexual.

"Clicking on the headline or the image, which is really being loaded from the BBC web site, will take email readers to a virus-laden web page," said Gary Warner, director of computer forensics research at UAB. "The danger is that almost no antivirus products detected this virus when it began to be distributed this morning. Only four of 36 antivirus products knew this was a suspicious file in our tests this morning."

Spamming on current news topics is not new, but the rate at which the attacks are foxing anti-spam filters is worrying. Several of the servers sending out the spam are from within Russia, according to Warner, but this was unlikely to be a government-organised attack despite the use of state servers. "Several of the computers being used to send the new spam campaign are in Russia, including at least one computer owned by the Federal Agency of Education," he said. "These spam messages serve a dual purpose: propaganda attack against Georgia, and adding of compromised hosts to botnets controlled by pro-Russians."

Question for the Class

HELP! How can I more effectively keep up on all of the postings on blogs? I'm struggling with trying to keep up with posts to blogs and to participate/comment on those blogs. How are you compiling the additions to all of the blogs? I tried using an RSS feed, but not all of the changes are showing. I'm wasting huge amounts of time just trying to read all of the blogs and bouncing around from blog to blog.
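For the RSS approach, here's a minimal sketch of pulling the latest item titles out of a feed using only the Python standard library. The feed XML is canned so the example is self-contained; a real aggregator would fetch each blog's feed URL with urllib.request and poll on a schedule.

```python
import xml.etree.ElementTree as ET

# A canned RSS 2.0 feed standing in for a real blog's feed; in practice
# this string would come from urllib.request.urlopen(feed_url).read().
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Class Blog</title>
  <item><title>Week 3 assignment</title><pubDate>Thu, 21 Aug 2008 09:00:00 GMT</pubDate></item>
  <item><title>Futuring draft posted</title><pubDate>Fri, 22 Aug 2008 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def latest_titles(feed_xml, limit=5):
    """Return (blog title, list of item titles) for one RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    blog = channel.findtext("title")
    items = [item.findtext("title") for item in channel.findall("item")]
    return blog, items[:limit]

blog, titles = latest_titles(SAMPLE_FEED)
print(blog, titles)
```

Running this over every blog on the class list would give one skimmable digest instead of bouncing from blog to blog.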

Draft Futuring Prediction

Socio-Technical Futuring
Technology Prognostication (draft)

Issue: Define the impact and future of IT technology support for education

Background: Education has long been the foundation of our society. Issues and problems in the current US education system are reflected in social problems such as chronic unemployment, crime, social unrest and continued inequality.
As such, the educational system from pre-school to post graduate study must improve to meet the next generation problems. As a backdrop to the chronic need to improve the educational system, there is also a need to improve the competitiveness of the US in the world market.
The current educational philosophy provides a structured curriculum built around a brick-and-mortar experience with teachers and other students. The socialization provided by this experience is a critical part of learning; however, the process for imparting knowledge to the students should be reviewed.
The current philosophy for teaching is similar to physical training, which is achieved by repeating a task until “muscle memory” is achieved. The future of education and the imparting of knowledge must change to meet future challenges.
Further, students who can’t take advantage of brick-and-mortar schools must be provided an equivalent learning experience.
The education process is based on a couple of key concepts:
· Learning through memorization,
· Mentoring of students by faculty in difficult areas,
· Practice of learned principles,
· Measurement of learned principles through tests

Education has been hindered (to a varying degree) by:
· Social skills of the teachers
· Time available for teaching required skills
· Varied learning speeds of the students
· Reliance on past learning methods

Discussion: The needs analysis approach (as discussed in the earlier white paper) is used to establish the framework for the technology forecast.
Table 1 provides an overview of the need analysis. In twenty years, students will face a bewildering array of information available from sources as diverse as the follow-on internet, vast data warehouses, and the composite accumulation of information from the human experience. The needs of students at the elementary, high school, undergraduate and graduate levels of study will greatly increase. Therefore, this forecast will look only at the future needs of elementary students.
The future student will not be required to attend a brick and mortar school; however, some vestiges of the schools will remain to encourage social growth and interaction. In general, the student will no longer deal with keyboards and other mechanical means of interface to the educational system. Communications will be accomplished through voice and movement with sensor systems to “read” the student’s non-verbal responses.
Educational learning methods will be tailored to the individual through unlimited availability of all teaching venues. Thus, if a student is more arts-inclined, advanced arts programs will be available, and specially tailored math programs for art students will also be available. Further, since the manner of presentation of information is critical to understanding, educational venues that are tailored to an individual’s method of understanding will also be available.
While a universal language may not be available for all students, great strides will be made to consolidate curriculums around a couple of major languages. From these major language “strings”, a more consistent approach to education will be developed. This consolidation will allow more effort to be placed on the development of an individually tailorable educational experience.
Finally, significant progress will be made on the next-generation learning model. The current educational model is based on repetition, much like athletes use practice to build “muscle memory”. The new model will take advantage of the inherent learning process of humans to modulate progress, minimize needless repetition and provide a better understanding of the underlying concepts. This model will be based more on investigation and curiosity than on structured learning.

Table 1. Need Statement Overview
(columns: Need Area | Time Frame | Resolved Need | Possible Technologies | Technology Prediction)

Need Area: Educational Time
Time Frame: +20 yrs
Resolved Need: Improve the time for educational learning for both brick-and-mortar and virtual students
Possible Technologies: IT broadband at home (H); limited brick-and-mortar time (H)
Technology Prediction: At-home education with social interaction opportunities

Need Area: Learning Method
Resolved Need: Provide efficient means for improving comprehension of subject; 1-language educational system with increased emphasis on tailorable individual options
Possible Technologies: US-wide consolidated educational system (H); tailorable learning (H)
Technology Prediction: Virtual classroom based on a single educational system; tailoring of the instruction will be provided through limited AI

Need Area: Learning Method
Resolved Need: Provide efficient means for improving comprehension of subject; technology-aided interface for education that supports very young students
Possible Technologies: Sensor-enabled response monitoring (H); non-mechanical interface with computers (H)
Technology Prediction: Language-based computer interactions with sensor feedback

Need Area: Learning Method
Resolved Need: Provide efficient means for improving comprehension of subject; breakthrough on increased retention of knowledge not using “muscle memory”
Possible Technologies: Curiosity-based education (M); unstructured learning (L)
Technology Prediction: Methodology will be developed to expand the natural-curiosity approach to education

Conclusion/Prediction: The classroom of the future will be significantly different from the current classroom. A significant part of the improvement will be technology-based; however, significant strides will also be made in basic educational methodology. The future classroom will include state-of-the-art computer interfaces that allow even the youngest children to interact with the computer. This interface will be based on human interaction, using natural language and sensor perceptions of the child. Instruction will be focused on a limited number of primary languages and will be more universal across countries. The methodology will be adaptable to each student through AI techniques. Finally, the methodology for learning will migrate to a natural-curiosity concept as opposed to the current repetitive approach.

[1] Joan Brennan, “Teachers of the Year - Thank You, Charlie Rose!”, http://ezinearticles.com/?Teachers-of-the-Year---Thank-You,-Charlie-Rose!&id=1325161

[2] Gregory Demetriades, “Internet Live Teaching Solutions”

[3] Michelle Kawamura, “Issues and Trends in Curriculum - From Technology to Global Awareness”, http://ezinearticles.com/?Issues-and-Trends-in-Curriculum---From-Technology-to-Global-Awareness&id=1274066

[4] Ruth Herman Wells, “Motivate! New Methods Exist That You Can Use Right Now with Unmotivated Students”

[5] Sarfaraz Ali, “Predictions Regarding Future Or Future Concepts”

[6] Seth Garrison, “Is A Universal Language Coming With The New Age?”

Futuring Methodology - Needs based Analysis

Socio-Technical Futuring
Technology Prognostication

Issue: Provide a Socio-Technical Futuring forecast for the state-of-the-technical-art in the range of ten to fifteen years from now.

Background: Forecasting future technology is at best a very imprecise science. While our text has tried to establish some rules, it also provides examples of how the current framework fails to meet the requirements for forecasting.
One of the best sources for forecasting of technical capability development has been science fiction authors. Reviewing the source of these relatively accurate predictions, there appears to be a direct connection to “need”. The characters and plots of these writings are required to meet new challenges and at the same time the mechanics of the writing require unique technical solutions to support the plot.
Science fiction tends to place characters, with all of the flaws and foibles of today’s humanity, in future situations with the task of overcoming and succeeding in the face of future challenges. This tends to distil the needs of the characters into identifiable patterns. Writers then have the choice of solving the needs of the characters in terms of technology or “character”. Since this genre tends to embrace technology, many of the needs are resolved by technology that is “the brain child” of the author. For example, in “ALIEN”, since the heroine is relatively weak compared to the creature, a robotic suit is envisaged to even out the difference in hand-to-hand combat. This needs-analysis approach forecasts a technology – robotic suits – that meets the needs of the character and is also practical for future technology development.
A second approach to needs definition in science fiction writing is the need for technology to support the plot and characters. For example, in the “Star Trek” series, the time required to move characters or respond to emergencies using mechanical movement would be too long to support the plot. The technology solution was near instantaneous movement of the characters over large distance using a “teleporter”. Another plot support example from Star Trek is the need for the Enterprise to be surprised by an adversary. The technology responding to this need was the “cloaking” device that allowed the authors to surprise Capt Kirk without making him look like an idiot.
It is proposed that perhaps looking at a need matrix can help generate more precise future technology requirements. Further, it would also tend to identify practical technology that has an already defined purpose.

Discussion: Needs analysis must be carefully constrained to allow for prediction of future technology requirements and concepts for this assignment. Table 1 provides an overview of the need analysis.

Table 1. Need Statement Overview
(columns: Need Area | Time Frame | Resolved Need | Possible Technologies | Technology Prediction)

Time Frame: +20 yrs
Resolved Need: Quietly move to the adversary’s location (1-person transport, non-space travel, multi-terrain, quiet)
Possible Technologies: Electric propulsion (H); anti-gravity (L); teleport (L); new-technology aircraft (H); density difference (M)
Technology Prediction: (see Conclusion/Prediction below)

Placing my character in the year 2028, he is faced with the need to fight an adversary across the world. He must quietly move across terrain similar to Earth's, including water, ice, ground, etc. On arrival he must face a physically more capable adversary.
Table 1 depicts the resolved needs derived from the situation. In this short exercise the full resolved-need process is not developed, but a relatively repeatable process can be. From the resolved needs, a set of technologies is provided that could answer each resolved need. Each technology is assessed on the practicality of its being available in twenty years (noted as low, medium, or high probability of being developed in that time). The final column provides the technology prediction.
The need to be quiet eliminates the use of any current technology such as propellers, jets, engines and even wheeled vehicles. A new technology is required that doesn’t rely on any current propulsion technology.
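The assessment step of this matrix can be made concrete with a tiny illustrative sketch -- the names and L/M/H ratings come from Table 1, but the code itself is just a toy, not part of the methodology:

```python
# Rank candidate technologies by their probability of being available
# in twenty years (L = low, M = medium, H = high). Note that, as the
# discussion explains, a constraint like the quiet requirement can
# still eliminate highly rated candidates such as propulsion.
RATING = {"L": 1, "M": 2, "H": 3}

candidates = {
    "electric propulsion": "H",
    "anti-gravity": "L",
    "teleport": "L",
    "new-technology aircraft": "H",
    "density difference": "M",
}

def rank(cands):
    """Order candidate technologies from most to least likely."""
    return sorted(cands, key=lambda name: RATING[cands[name]], reverse=True)

print(rank(candidates))
```

With such a ranking in hand, the forecaster then applies the resolved-need constraints to pick the prediction from among the surviving candidates.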

Conclusion/Prediction: Two predictions are made. First, anti-gravity may satisfy the need.
Anti-gravity will probably require a lightweight energy source. The cascading need should be developed in a similar pattern. For this example, it is assumed that development of carbon nano-structures will provide a high energy, efficient, lightweight energy storage capability.
The anti-gravity capability will need to support my character. The mechanism could be a support structure like a skateboard or a suit. In this case, since not all people are coordinated enough to use a skateboard, a suit would be better. A full suit provides the required support to the whole body; for example, an anti-gravity belt would require the character to hold his body straight while being picked up at his waist. Further, moving through gravity anomalies on earth will cause some “turbulence”, which can best be countered using a suit.
The probability of this technology being developed is relatively low in the next twenty years since it requires a major breakthrough in physics for antigravity. The lightweight energy source using carbon nano-tubes is very likely.
I am predicting that breakthrough research will identify a way to create a bubble of lower density around a body. This will create buoyancy similar to that of a body in salt water. The buoyancy bubble will be used to lift a human into the air, and the buoyancy will be manipulated to maintain a specific height. Buoyancy eliminates the need for a vehicle; only the system that creates the bubble is required. Movement will be created by manipulating parts of the buoyancy bubble or by heating areas of the bubble surface. This technology will require significant energy, but that need will be met by the carbon nano-tube technology described earlier. The technology required to create the buoyancy bubble has a medium level of difficulty in the next twenty years.