Monday, August 20, 2012

Interstellar promulgation of governance

Okay, okay, I wrote this a while ago, but it's my first post, so sue me. Well, don't; I have no money for you to take. Here it is, copied from my computer.

There are three planks needed for a successful and long-lived interstellar civilization. All three deal with the problem of time and space, and the vast distances of both. Put simply, no civilization lasts; over truly huge spans of time and space, even the hardiest civilization cannot last. That will in general always hold true, but it can be mitigated somewhat by three effective solutions that counteract the problem of impermanence.

Why does impermanence matter? Because many of the projects we have in mind, or that any interstellar government might have in mind, involve large scales of both time and space. Terraforming, for example, requires in the best case a minimum of a thousand years (for Mars), and for other planets the time scale would be even larger. The generation ship concept runs into the same problem: in the best case it would take thousands of years to transit to the nearest star, and in the worst case millions. Other interstellar travel schemes also require some degree of coherence over long time scales, such as the laser-boosted sail or various kinds of environmental mitigation.

The problem of coherence and permanence shows up outside of space topics as well, for example in artificial intelligence and the concept of the singularity. Much has been made in the techno world of artificial minds eventually designing ever-faster-thinking versions of themselves, which in the best case solve all the world's problems, and in the worst case, well, it's not worth thinking about, and in any case it is not relevant to the discussion of permanence. Permanence, incidentally, is not my own topic but one I read about in a book; I give full credit for the concept to that author, though I am expanding on it a bit.
Anyway, the problem of permanence applies as much to fast-thinking AIs as it does to an interstellar civilization's governing problems. Put simply, an AI's fast-evolving, fast-thinking speed opens up the same problem of vast scales of time and space, because its perceived time has increased. Translation: the faster an AI thinks, and the more even-faster offspring it develops, the worse its problem in interacting with the real world, or in doing or thinking about anything that even approaches our short human time scales, because eventually it is thinking so fast that it must hold itself together over the equivalent of thousands, possibly millions, of years of subjective time.

It also probably leads to the problem of what I will call nowatitus: instead of existing over the sum total of that time frame (say a few hours our time, several millennia its time), any given version of the AI only really lives within, say, a century of its own time. This produces the same coherence problems that any long-lived government or long project would face: how do you solve problems over truly long time scales? How do you maintain a government, or even just a motivating force, over the spans needed for truly significant projects? The dreams of terraforming various worlds, while perhaps technically quite sound, may not be possible because we, and our children in the form of AIs, simply think too short-term, and in many cases are willing to change too quickly to see these projects through. Generation ships currently represent the most technically plausible form of interstellar ship, but they run up against the limits of human coherence, of any motivating force sustained over centuries and eons. Put simply, you can send a crew to the nearest star, but you cannot guarantee that they will colonize it at the end of the mission, or even that they will not have regressed to a primitive state, since the time scale is long enough for any civilization to have collapsed. It is the same problem with terraforming, which would take time scales equivalent to generation ships and require similar coherence throughout.
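To make the scale of the problem concrete, here is a minimal sketch of the arithmetic. The million-fold speedup factor is purely an assumption for illustration, not a prediction, and the function name is my own invention:

```python
# Toy arithmetic for subjective time experienced by a fast-thinking AI.
# The speedup factor below is purely illustrative; no one knows real values.

def subjective_years(wall_clock_hours: float, speedup: float) -> float:
    """Convert wall-clock hours into subjective years for a mind
    that thinks `speedup` times faster than a human."""
    HOURS_PER_YEAR = 24 * 365
    return wall_clock_hours * speedup / HOURS_PER_YEAR

# An AI thinking a million times faster experiences ~114 subjective
# years in a single wall-clock hour -- an entire human lifetime.
print(subjective_years(1, 1_000_000))    # ~114.2 years
# A single wall-clock day already spans several subjective millennia.
print(subjective_years(24, 1_000_000))   # ~2740 years
```

Under those assumed numbers, an afternoon of our time is an entire civilizational history of the AI's time, which is exactly the coherence problem described above.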
How do you maintain coherence? How do you build an interstellar government in a real world where, even in the best case, reaching the nearest star takes decades to centuries? Many would say it cannot be done. I dispute that notion; I see it as a design problem, and as I see it the design would require three main planks.

The first two are technological, and the third is sociological. First, you need a fairly fast system of transit, as close to the speed of light as possible. The faster you go, the easier it is to maintain coherence over long time scales: cultural information can be propagated, and in general the colonies have less room to diverge, simply because the vast amount of time separating them from the home government is significantly reduced. In general, the faster you go, the easier coherence becomes; the slower you go, the harder it becomes.

An addendum: high-speed ships also make coherence in terraforming much easier, since the faster you go, the slower time becomes aboard. Even with the civilization that launched it long gone, a terraforming fleet could keep terraforming a world, keep jumping forward in time, until the world has reached earthlike conditions, and then colonize it. And an addendum to that addendum: high-c ships make it possible for a suitably smart civilization to effectively colonize the future. Knowing the approximate lifespan of civilizations, or of one's own civilization, and extrapolating a little extra time, a fleet of ships could be deliberately sent into the future, not to colonize space but to extend the civilization's lifespan beyond its projected impermanence at home. Of course this tactic has its own disadvantages, the most obvious being something happening to the ships in transit, and eventually things like the expansion of the universe ending the party in general.
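A back-of-the-envelope illustration of why speed matters twice over, once for shortening the gap between colonies and home, and once for time dilation aboard the fleet. This is standard special relativity; the constant-velocity assumption (no acceleration or deceleration phases) is a simplification I am making for the sketch:

```python
import math

def transit_times(distance_ly: float, v_frac_c: float):
    """Return (earth_years, ship_years) for a constant-velocity trip.
    Acceleration phases are ignored -- a deliberate simplification."""
    earth_years = distance_ly / v_frac_c
    gamma = 1.0 / math.sqrt(1.0 - v_frac_c ** 2)   # Lorentz factor
    ship_years = earth_years / gamma               # proper time aboard
    return earth_years, ship_years

# Proxima Centauri is ~4.24 light-years away.
for v in (0.1, 0.5, 0.9, 0.99):
    e, s = transit_times(4.24, v)
    print(f"v = {v:4.2f}c: {e:6.1f} yr Earth frame, {s:6.1f} yr aboard")
```

At a tenth of light speed the crew ages almost the full 42 years of the trip; at 0.99c the Earth frame still sees about 4.3 years pass, but the crew experiences only about 0.6 years. That asymmetry is what lets a fleet "jump forward" through a terraforming project, or past the projected death of its home civilization.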
The second plank is life extension technology. In general, longer lives would contribute a much-needed conservatism regarding change and propagation. Put simply, and contrary to many popular techno thinkers, it seems more likely that with life extension people would grow more conservative as time goes on; at the individual level it would also produce a selfish desire to see projects actually finished. Why do I call that conservatism much needed? Most in the techno field prefer progress and change, and always treat them as good things. They can be; they also can fail to be. Slowing the rate of change and progress through life extension would allow for better decisions about truly long-term projects. It would also slow the rate of social change, which adds to its advantages as a plank for interstellar government and long-term projects.

The third plank is, as I mentioned, sociological, and it unfortunately represents the most unknown of the three 'technologies': unknown because we know more about life extension and propulsion than we know about ourselves, about intelligence, and about maintaining coherence over long time scales. It would be, I think, a motivating force for action over long time scales. It would not itself be a government, but it could serve as the foundation for one, a background motivating force linked to the family level of humanity. Put simply, it would be a religion.

Why a religion? Many of the more rational or atheistically inclined would shy away from thinking anything good about religion; most see it as a bygone holdover from an age of irrationality. I disagree. It has a purpose even in today's age, and will have an even more important purpose in the farther future: to serve as a motivating force for action over long time scales. If there is one advantage that I, as a rationalist, respect (I actually see others, such as helping one's fellow man and backing up government, but this is the part I find most important), it is the essential longevity of religion, its capability of surviving when even the civilization that birthed it is long gone. But it kills people; but it does this and it does that; it's irrational. Well, yes, some of that is true, and some of it can be argued either way: does religion actually do these things, or are they a function of the human condition? In any case, religion has served as a motivating force throughout Western civilization, providing a coherence that is sadly lacking in any government that has ever existed; it has conducted projects extending over centuries, and has, in the case of the Jewish faith, propagated a message from far, far in the past. Jews still hold to the pork prohibition long after any of the original reasons for creating it have ceased to exist. That is an astonishing capability. Think about it: over two thousand years, and Jews still hold to their pork prohibition. What if we could do the same in our current era with environmentalism, terraforming, or other long-term projects such as maintaining radioactive storehouses?

So I propose an essentially rational religion, one that would allow for long-term planning and coherence over truly long time scales while still, as history has shown, allowing for some progress. It would be best if this rational, or irrational, religion (I say irrational because it would necessarily have to incorporate irrational concepts to function equivalently) incorporated preexisting religions, but with an eye to goals such as terraforming or generation ships. Why? Best not to reinvent the wheel, and drawing upon the tried-and-true religions means we inherit a robustness whose survivability has already been demonstrated. This motivating force would make it much easier for a government to function across interstellar distances, because of shared culture and shared goals such as terraforming world X or maintaining laser boost station X.

Now, why not just one plank, or two? Because of stress on the system. All three planks will eventually reach their limits when pushed over large scales of space and time, but combined they allow for far more coherence, put less stress on any one plank, and create more robustness overall. Needless to say, a deficiency in any one of the three places more stress on the other two. In other words: the slower the propulsion, the more stress; the shorter the life lived, the more stress; and the less effective the motivating force, still more stress, all of it contributing to an eventual collapse of coherence.
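To make the tradeoff explicit, here is a purely illustrative toy model. The functional form, the weights, and all the names are my own invention, not a real metric; the only point it encodes is the claim above, that a deficit in any one plank inflates the total stress rather than being fully offset by the others:

```python
# A toy model of the three-plank tradeoff. Entirely illustrative:
# the formula and constants are invented to express the qualitative claim.

def coherence_stress(speed_frac_c: float, lifespan_yr: float,
                     motivation: float) -> float:
    """Higher return value = more stress on overall coherence.
    `motivation` is a 0..1 score for the strength of the shared
    motivating force (plank three)."""
    transit_stress = 1.0 / max(speed_frac_c, 1e-9)   # slower -> worse
    lifespan_stress = 100.0 / max(lifespan_yr, 1.0)  # shorter -> worse
    social_stress = 1.0 / max(motivation, 1e-9)      # weaker -> worse
    # Multiplicative, not additive: a failure in any single plank
    # multiplies through, so no one strong plank can fully compensate.
    return transit_stress * lifespan_stress * social_stress

print(coherence_stress(0.9, 500, 0.8))  # three strong planks -> ~0.28
print(coherence_stress(0.1, 80, 0.2))   # three weak planks   -> ~62.5
```

The multiplicative form is the design choice that matters: it captures the idea that the planks back each other up, and that losing any one of them pushes the whole system toward the collapse of coherence.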

Now, how does this apply to AIs? It applies because the solutions might look similar. One, AIs require a motivating force over long time scales. Two, they need to slow their rate of change by effectively extending their own lifespans, so continuity of personhood is needed. And lastly, they need some equivalent of high-speed propulsion: communication between the various orders to maintain coherence, or alternatively a system of re-establishing versions of the AIs that forces coherence again, perhaps programmed into the basic architecture and not changeable in derived forms. A toy sketch of that last idea follows.
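This is a hypothetical sketch of "coherence by periodic re-anchoring": derived AI generations drift, but are reconciled against an immutable core charter baked into the base architecture. Every name and mechanism here is my own invention for illustration, not a proposal from the literature:

```python
# Sketch: goal drift across fast-thinking AI generations, corrected by
# re-anchoring to an unchangeable core charter. All names are invented.

from dataclasses import dataclass, field

CORE_CHARTER = frozenset({"finish terraforming", "maintain boost station"})

@dataclass
class AIGeneration:
    version: int
    goals: set = field(default_factory=lambda: set(CORE_CHARTER))

    def drift(self, dropped: str, gained: str) -> None:
        """Simulate goal drift over one fast-thinking generation."""
        self.goals.discard(dropped)
        self.goals.add(gained)

    def reanchor(self) -> None:
        """Force the core charter back in -- the architectural
        mechanism the paragraph above speculates about."""
        self.goals |= CORE_CHARTER

ai = AIGeneration(version=1)
ai.drift(dropped="finish terraforming", gained="pursue local optimum")
ai.reanchor()
assert CORE_CHARTER <= ai.goals   # core goals survive the drift
```

The point of the sketch is only that the reset lives below the level the generations themselves can edit; whether anything like that is actually buildable is exactly the open sociological-architectural question.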
Beyond the lack of coherence, this also matters for preventing AIs from evolving into less useful forms; it allows for more directed evolution, just as in the case of an interstellar government it allows for better coherence. Evolution is often viewed in the techno world as a simple process of progress, worse to better, less intelligent to more intelligent. This is not the case, and it would not be the case in AI evolution either. Evolution is quite simply adaptation to existing conditions, and it changes with those conditions; it does not respect intelligence or any adaptive capability beyond what matters at the moment for survival. This means evolution could very well shift toward what we would view as simpler, more harmful, or just plain stupid forms. We as a species pride ourselves on intelligence, but the truth is that in strict Darwinian terms it is a temporary survival advantage at best (if it can be viewed as a survival advantage at all, given the dangers it exposes us to with regard to nuclear war or climate change).

We will return to the AI evolution problem later, but first a kind of analogy. In the current world we face the problem of gene alteration of various species, and in some cases of our own. Many view it as a good thing, a way to address hunger and to adapt crops to climate shifts; in our own species it is viewed as a way to make us superhuman or to cure various diseases. Yet already we are seeing problems: possible superweeds through cross-pollination with natural variants, or the possible wiping out of the natural variants on which the entire genetic engineering industry depends. Those in the techno world think of gene engineering as something independent: you alter traits, and that's it. But they never think about how those traits are altered. Gene engineers do not create whole new traits; they transfer traits from existing stocks and natural variants, creating hybrid versions. Without those natural variants and existing stocks there is nothing to transfer from, and gene engineering exposes its limits as the panacea it is taken to be. You can look at the natural world as an adaptive library: without it we have no way of checking out traits to do any gene engineering at all. Look at the information on the destruction of seed banks and natural variants, and ask what gene engineering can do in that light.
I appear to have diverged somewhat from the topic; apologies, getting back on track. The alteration of species, and perhaps of ourselves, is always viewed in a short-term manner: trait X is important in the now, and it is not considered in the later. We often view a trait positively because in the now it is very useful, but we as a species are not very good at judging its goodness over the long term. DNA and RNA are incredibly hardy carriers of information; they have existed in one form or another since the dawn of life on this planet. This means that by altering genes in our own species or in others, we are imprinting traits that will still be expressed over many millennia, even millions of years; we are creating traits whose full expression lies far beyond our current horizon of perspective. We think a trait is valuable now, and it might be, and it might remain so for some time into the future; perhaps by then we have decided this set of traits is all we need, and so we eliminate the old, less useful ones. But then we come up against a really nasty problem: the traits that for so many years seemed so useful are now genuinely hazardous, and we have burned our bridges, so we cannot even try returning. Indeed those hypothetical traits might even keep us from realizing there is a problem until it is too late; paradigms and traits constrain thought in certain ways. Being really smart in one particular way might seem very good, but it might also keep us from seeing a problem at all. The reality is that our not seeing a problem does not prevent it from existing; we simply edit the information of the world so as not to see it, and it remains dangerous despite our not seeing it for the moment.
So after that incredibly long and probably boring analogy, what does this mean for AI evolution? It means that problems can take longer than the short term to appear, and can emerge over quite long time scales. Whatever intelligences are created by faster-thinking creatures would appear to evolve quite quickly, and from their own perspectives they might create intelligences that seem very good. But beyond their limited fast-time perspectives, they might, like humans, create traits that are good in their own now, and that seem good for quite some time, while evolution, as mentioned, does not respect intelligence. What does the creation of such intelligence lead to? It is often assumed that such fast intelligences would know what they were doing, and perhaps given their own perspectives they would, but over a subjective time of centuries, eons, or millions of years, would this hold true? In the previous section we talked about coherence and the extension of selfhood. If an AI only really lasts, say, a century of its own fast time, what does that do to its psychology? We know our own lack of continuity contributes to a coherence problem; wouldn't it do the same for an AI? 'Not lasting' here doesn't mean death; it means change. But the results are the same: the AI shifts and changes to fit its unique current circumstances, and thus the old now 'dies'.

Again I have diverged; apologies. What does this mean in the context of AI evolution? It means that perhaps, just maybe, their evolution would not tend toward greater and greater intelligence but toward their equivalent of our bacteria, the simpler, more adaptable forms, simply because those have the best traits, the ones that survived long enough to interact with our world. Or maybe those that change the most eliminate themselves from their version of Darwinian selection, because over the long term they change too quickly to understand the full consequences of their decisions; changes that seemed great to them at the time kill them off when played out over subjective eons. Also, from our perspective, an AI might come up with solutions that look fine over its own time period but not over longer periods. In other words, a faster-thinking AI would have the same problem of short-termism relative to itself that we have relative to our own world.
Directed evolution, both human and AI, often carries the peril of traits that seem fine in the short term but are quite dangerous over the long term. This needs to be considered both in the discussion of whether such changes are good and in the purely pragmatic discussion of how to make them in the first place. We must think over the long term when considering changes to ourselves; we must not burn our bridges, and neither should our possible AI children. We must not be so taken in by the seductive allure of various possibly godlike traits when those traits might not be so good over the long term, and again, neither should our AI children. There will probably be a response somewhere along the lines of 'well, they won't, they're too smart' or 'technology arrives at the best solution', but the facts of our history show that good outcomes are not predestined. They are usually planned; they should not be expected to just materialize but must be created and forced into existence, because if not, the environment will determine the solution, and it will not be the best solution, certainly not for our own species or our AI children. Smartness can often have its own blinders. We must ensure that we see in as many directions as possible, and so ensure our survival into the far deep time of the universe, not assuming that such survival is predestined, but ensuring it because we create it, because we see the reality and the problems for what they are, and plan for the long term.
