Sunday, August 26, 2012

Technological Extrapolation into the Future

Extrapolating into the future is often seen as a problem, primarily because so many past attempts have been dismal failures. Some attempts have succeeded, but many have not. Some even posit a technological singularity beyond which no technological extrapolation will be possible at all. The difficulty with extrapolating into the future is that we do not know what environmental conditions will exist, or how human beings and their technological development will react to those conditions. Take drone technology as an example: it is widely seen as the coming thing in military technology, but is it really? Tested today against technologically unsophisticated opponents it has proven incredibly useful, but would it be as useful against a more advanced nation? That remains to be seen, and so we can see the danger of thinking we can project a trend into the future. We get stuck in the now, and assume that with some improvements the now will hold true into the future. I am not going to say this is a bad way of looking at things, because this method, i.e. assuming trends from the past will continue to hold, has served us well, and incidentally is the way most of our science works. Of course the drone example hasn't had a long enough trend to be a truly useful extrapolation, but that is beside the point.

The point is that technology will always be imperfectly predicted into the future. But there are trends that are predictable, trends that represent what we might call laws of technological history: laws which apply irrespective of environmental conditions or how we react to them. These are not things like Moore's law, which is a very specific claim about what computer hardware improvements will be, but very general laws which should be obvious given some knowledge of history and of scientific law. The first law: every technological system will incorporate tradeoffs in its design and function. This is not something based on our own reactions but a consequence of the multiplicity of natural laws in our universe, and so it will hold true as long as any technological system exists within the scope of those laws. Incidentally, it holds true for all life forms as well. An object designed to move through the air will not move as well through the ocean, and vice versa for an object designed to move through the ocean. A person engineered for superior strength, say through genetic engineering, will pay for it somewhere else, perhaps in slower speed. A space probe designed to be really hardy in the vacuum of space would not do as well in, say, a thick atmosphere.
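To make the tradeoff idea concrete, here is a minimal sketch, assuming a toy model entirely of my own invention (the budget and the diminishing-returns curve are illustrative assumptions, not real engineering numbers): a fixed design budget split between performance in air and performance in water, where any allocation that improves one medium necessarily worsens the other.

```python
# Toy model of the first law: a fixed design budget split between two
# environments. The budget and the square-root "diminishing returns"
# curve are illustrative assumptions, not real engineering data.
import math

def capabilities(air_share, budget=1.0):
    """Return (air, water) capability for a given share of the budget.

    Square roots model diminishing returns: doubling the budget spent
    on one medium does not double performance there.
    """
    air = math.sqrt(budget * air_share)
    water = math.sqrt(budget * (1.0 - air_share))
    return air, water

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    air, water = capabilities(share)
    print(f"air share {share:.2f} -> air {air:.2f}, water {water:.2f}")
```

The exact curve does not matter; the point is that no choice of air_share escapes the tradeoff, it only relocates it.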
The second law: as the complexity of any given object rises and its number of tradeoffs increases, whether technological or biological, so too does its number of vulnerabilities. This seems to apply whether you are talking about our own technological development or the biological evolution of life. Complexity entails the cost of more vulnerabilities; it brings other strengths, true, but not without cost. Which suggests an equally intelligent AI or human would have an equal number of vulnerabilities, if not the same vulnerabilities, just different ones. Intelligence seems to come with complexity, so an intelligent anything would be more vulnerable than an unintelligent, simpler one; vulnerable in some ways, not as vulnerable in others. It is often assumed that an AI would be cheaper than its human equivalent, but perhaps it would be equally expensive, just in a different way: requiring a life support system just as extensive but different, requiring resources just as rare but different, and being just as mortal.
This is not a new or original idea of mine; I simply put the complexity concept forward because it is so often forgotten in technological circles. It is more often phrased as a system having more 'failure modes' the more complex it becomes. That is to say, the more complex it is, the more ways it can fail. This is not to say that it will fail, just that it has more points of vulnerability. A complex system can be more adaptable and encompass more capabilities, but with that comes the price of more vulnerability. The best example of this is a bacterium and a human being: a bacterium is a very simple organism, yet it can survive extremes that would kill a human. It has fewer failure modes than a human being, and so it survives even in the cores of nuclear reactors. A human being, on the other hand, is a much more complex organism, has more failure modes, and can live in fewer environments. One is intelligent and the other is not, which might be relevant: can a simple organism (whether technological or organic) be intelligent, or is complexity tied up with being intelligent? If so, the extrapolation of an intelligent AI entails that it would be more vulnerable, not less, though perhaps in a different way.
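The failure-mode arithmetic is easy to see with a crude model. Here is a minimal sketch, assuming (my assumption, not a claim about any real system) that the system is a simple chain where any one component failing brings the whole thing down, and that components fail independently with the same small probability.

```python
# Second law, crudely: if any one of n independent components failing
# kills the system, reliability is (1 - p)**n and drops as n grows.
# The 1% per-component failure rate is an illustrative assumption.

def system_reliability(n_components, p_fail=0.01):
    """Probability that a series system of n components all work."""
    return (1.0 - p_fail) ** n_components

for n in (1, 10, 100, 1000):
    print(f"{n:>4} components -> reliability {system_reliability(n):.4f}")
```

Real systems add redundancy to fight this, but redundancy is itself more components and more complexity, which is the first law's tradeoff showing up again.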
The third law is tied up with complexity and tradeoffs: any technological system will have some degree of inefficiency, and with that will produce some amount of waste. The inefficiency is unavoidable, as it is primarily a problem of tradeoffs between the different environmental conditions that a technology exploits. You can make a technology more efficient, but it brings with it less capability. To use an analogy, an airplane can be made to go into both the ocean and the air, but only at the cost of decreased capability in both. You can make a submarine for one environment and an airplane for the other, and each will be more efficient; bearing in mind you only make it more efficient, nothing in the real universe is ever going to be a hundred percent efficient. It is primarily at the interface between tradeoffs that you get the most inefficiency. And with the inefficiency comes waste. The waste might vary from one technological system to another, but it is always there, and it takes the form of tradeoffs as well: to eliminate waste heat, you might have to convert it into another form of waste. So any technological civilization will have to deal with waste; it is just how they deal with it that will vary and be unpredictable.
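The hundred-percent point is not just hand-waving; thermodynamics puts a hard ceiling on heat-engine efficiency. A quick sketch of the Carnot limit follows (the reservoir temperatures below are illustrative numbers I picked, not anything from the post):

```python
# Third law grounded in thermodynamics: an ideal heat engine running
# between a hot and a cold reservoir can never beat the Carnot limit
# eta = 1 - T_cold / T_hot, so some fraction of the input energy is
# always rejected as waste heat. Temperatures (kelvin) are illustrative.

def carnot_limit(t_hot, t_cold):
    """Maximum possible efficiency of a heat engine (temperatures in K)."""
    return 1.0 - t_cold / t_hot

for t_hot in (373.0, 600.0, 1500.0):
    eta = carnot_limit(t_hot, t_cold=300.0)  # reject heat near room temperature
    print(f"hot side {t_hot:4.0f} K -> efficiency cap {eta:5.1%}, "
          f"waste heat at least {1 - eta:5.1%}")
```

Real engines fall well short of even this cap, which is the point: the waste is not a defect to be engineered away but a floor set by natural law.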
So, to rehash: there are three predictable general laws that every technological civilization will have to deal with, whether or not it is past the technological singularity. One, every technological system will incorporate tradeoffs in its design and function. Two, the more tradeoffs incorporated, the more 'failure modes' it has. Three, all technological systems are inefficient to some degree, and because of this they will produce waste in one form or another. There are probably more; these are just off the top of my head (oh, and one more: the more complex a system is, the more inefficient it tends to be). Anyway, I claim no originality for these ideas; they are there in our history, and probably in a great many books somewhere. I am just putting them out because they tend to be forgotten in discussion.


