Building a Better Brain
SeaQuest
18-12-2005, 03:16
Building a Better Brain
An Analysis of the Foundational, Moral and Ethical Questions Concerning Creation of a Sentient Computer System
SeaQuest
18-12-2005, 03:17
Abstract
As the Empire closes out the 24th century, it leaves behind a time of both great expansion and great turmoil. To quote the 34th and 35th Ferengi Rules of Acquisition, both war and peace are "good for business," and nowhere is this more evident than in the field of Emergent Technologies. Responsible for the conceptualization and eventual prototyping of new systems, this division of the Empire's Advanced Starship Design Bureau has been the catalyst for fully ninety percent of the major technological advances of the past century. Indeed, it can be credited with the development of many of the everyday apparatuses now taken for granted aboard Federation vessels and installations, as well as the latest and most radical concepts in weapons and defense. Sadly, the latter has taken precedence over the former in recent decades, yet research continues into the feasibility of more far-reaching ideas. One of the more intriguing, outlandish, and highly controversial concepts currently in development by the Emergent Technologies Division is that of the "self-aware starship": a vessel that, for all intents and purposes, is considered to be alive and a sentient being unto itself.
SeaQuest
18-12-2005, 03:17
Project Basis
The concept of the "sentient machine" is not a particularly unusual one. For centuries it has filled countless tomes of fiction, and most starfaring races have advanced, albeit haltingly, towards various forms of so-called "artificial intelligence." Several examples among alien cultures have been uncovered, most notably the discovery of the androids of Exo III by the late Dr. Roger Korby and that of the servile androids of planet Mudd in the 2260s. Sadly, instances of the fabled "machine run amok" have become all too ingrained in the mind of the general public, the case of Dr. Richard Daystrom and the ill-fated M5 tests of 2267 being foremost among them.
Still, there remain far more benign examples of fully or moderately "intelligent" hardware. Going as far back as the late 20th century, Alteran scientists experimented with the beginnings of what they termed "intuitive hardware," the efforts of Nova Technologies and the Forbin Project being at the forefront of these undertakings. As mankind ventured out, first into its own solar system and then to the stars, automated subroutines to detect and manage anomalies without human interaction became necessary. However, it was not until 2338 and the discovery of the android known as Data that the first truly understandable artificial intelligence became known to Federation science. Designed and built by Dr. Noonien Soong, Mr. Data has since graciously volunteered information to those in the field of cybernetics so that they may continue to build on his creator's work. To date, only one other prototype-model Soong-type android has been constructed, and it experienced a total cascade failure after only a few weeks of consciousness; this has not occurred in any of the production units.
Mr. Data's discovery had a profound impact upon the computer sciences of the time, and many of his systems were studied and incorporated into existing technologies. Similarly, his programming allowed for a grand advancement in intuitive software in the 2350s. Programmers working on the Galaxy Class Project utilized small portions of Data's basecode when writing the initial operating system for that class of starship. When I.S.S. Galaxy was commissioned in 2357, she became the first vessel to have what the designers referred to as "humanware": the ship's computer, although technically not sentient, outwardly presented a vocal interface designed to be more pleasant toward crews. The computer exhibited humanoid speech inflections and "emotional" subtones in its patterns, even to the point of observing the finer formalities of conversation. Sadly, this software --while far exceeding the goals set by the programmers-- had the effect of alienating the very crews it was intended to help; people seemed to be put off by the idea that the computer might be as alive as they were. By the beginning of 2365, the code had been removed from the four Galaxy-class vessels operational at the time and replaced with conventional, less "human" vocal interfaces.
Later that same year, I.S.S. Enterprise made formal first contact with the cybernetic hive-mind race known as the Borg. Numerous other encounters with this race over the next three years yielded massive amounts of technological information. Although the Borg are technically not an artificial intelligence, studies of the hive-mind structure as well as of captured hardware allowed great insight into the practical combination of biological and mechanical technologies to form a single integrated being. These analyses, along with incoming information about the societal structure of the Bynar race and their dependence upon a vast central planetary computer network for survival, jumpstarted the biomechanics section, and in 2370 the section introduced the bioneural gelpack computer system. An augmentation of the standard isolinear circuitry scheme, this concept utilizes synthetic neural cells suspended in a biomimetic gel package to organize and process raw data far more efficiently than isolinear and optical relays are capable of. This is due in part to the inherent capacity of organic neural systems to correlate chaotic patterns that still elude conventional hardware. First tested aboard the Intrepid and Sovereign classes of starships, the gelpacks initially showed a mild susceptibility to biological infections such as viruses and bacteria, and follow-on upgrades have added security measures and protocols to correct this flaw. The bioneural gelpack system is now a standard installation aboard all newbuild vessels and is retrofitted into all facilities and vessels undergoing major refits.
However, all of these foundational programs share a single unifying fact: they are all intentional undertakings with a very clear and defined goal in place. As history shows, some of the most important scientific discoveries have occurred through accidental circumstances: Fleming's taking notice of a piece of mold led to penicillin, Kroto and Smalley's search for an interstellar molecule led to buckminsterfullerene, and Govok's archaeological digs led to the discovery of dilithium and the Second Periodic Table. Chance and Luck have always been the patrons of science, spurning their disciples at one moment and rewarding them the next, and it is chance that has given rise to two unique situations applicable to the topic at hand.
In late 2370, the Galaxy-class I.S.S. Enterprise passed through a magnascopic storm in the Mekorda sector. Unbeknownst to the crew at the time, an apparent seed of unknown origin was deposited within the ship's main computer cores. This seed began to create interconnective nodes throughout the vessel, interlinking the various ship's systems into what was soon recognized to be a neural network similar to that of the aforementioned Lieutenant Commander Data. Further examination revealed that the resultant network enabled the developing intelligence to take control of Enterprise and direct it toward a nearby white dwarf star, which it then mined as a source of vertion particles. These particles were directed toward Cargo Bay 2, where the ship's replicator and transporter systems were rapidly constructing an object of unknown composition. Analysis by the crew showed that the form exhibited signs of neural energy—the construct was in fact an emergent lifeform utilizing Enterprise as a means to procreate. Upon realizing this, the crew aided the intelligence in extracting further vertion particles from the Macpherson Nebula until the lifeform was fully gestated. Once fully developed, it exited the ship, and the interconnective nodes throughout the ship's circuitry disappeared, returning all systems to normal.
The second situation is not a one-off affair, but a phenomenon with an increasing rate of occurrence within the Empire: that of the sentient holographic program. The first instance of this condition took place in early 2365, again aboard I.S.S. Enterprise. During a routine recreational holodeck adventure, Lieutenant Commander Data and Lieutenant Geordi LaForge --assuming the roles of Sherlock Holmes and Dr. Watson, respectively-- inadvertently caused the creation of a sentient hologram in the form of Holmes' archnemesis, Professor James Moriarty. The Moriarty character reached consciousness when it was given full access to the knowledge contained within the ship's main computer, even gaining the ability to call up the holodeck control arch. This was in accordance with Lieutenant LaForge's misspoken command to "create a character capable of defeating Data." This slip of speech --LaForge had intended a character capable of defeating Sherlock Holmes-- caused the creation of a new lifeform, one that very much wished to leave the confines of the holodeck. After careful negotiation, Enterprise Captain Jean-Luc Picard persuaded Moriarty to remain within the ship's protected memory core until such time as a means of moving him off the holodeck could be devised.
Four years later, the Moriarty problem resurfaced. In mid-2369, an Enterprise technician performing routine maintenance on the holodeck systems accidentally activated Moriarty out of protected memory. Moriarty then made the same demand as before: to be allowed to leave the holodeck. For a time, he used those same systems to create an elaborate ruse, making it seem to Captain Picard and Lieutenant Commander Data that he had merely willed himself off the holodeck and into the real world. He then demanded that they devise a way to bring the love of his life, a recreation of Countess Regina Bartholomew, off the grid so that he could join her; Moriarty would then relay any theories they developed back to the crew on the real ship so that they might attempt them. In the end, Moriarty's own trick was used upon both him and the Countess, whereupon they were placed into protected memory and the core block was removed from the control arch. Placed in a portable power source, the memory core was subsequently transferred to the Empire's Holographic Technologies Division at Jupiter Station for continued existence, interactive study, and monitoring. Through interaction on the station's massive hologrid, scientists were able to examine him in person and investigate how his sentience came about. Of great interest to them was the revelation that he had indeed experienced the passage of the four years he had spent in Enterprise's protected memory.
Another example of sentient holograms involves holomatrices utilizing intuitive subroutines. Examination of the Moriarty character, as well as adaptation of many of Mr. Data's subroutines, enabled Dr. Lewis Zimmerman to create the Emergency Medical Hologram program. First tested on the Intrepid class, the EMH was a radical piece of holoprogramming. Combining an impressive multicultural medical database with a vast array of variable programs, the EMH had the ability to adapt to new situations, learn from past experiences, and create new procedures. Designed as a short-term emergency solution, the programs were utilized quite extensively on smaller ships and in crisis situations until late 2373, when the EMH Mark 2 was introduced. These were followed by the Mark 3 and Mark 4, and the Long-term Medical Hologram in 2377. Designed to be the sole medical staff member on smaller vessels, the LMH incorporated the lessons learned over the years from the EMH program.
A problem encountered at some facilities with a greater dependence on the EMH Mark 1 was that the program exhibited signs of instability as it rapidly outgrew the parameters set by its programming. In most cases, this required that the program be completely recompiled from its core parameters, and it was part of the reason behind the Empire's decision to withdraw the Mark 1 from service. The programs had their routines rewritten and were assigned to plasma conduit cleaning duties aboard waste transfer barges. It was not until 2374 that the Empire learned of a still-active EMH Mark 1 program.
When I.S.S. Voyager was pulled 75,000 light-years across the galaxy by an alien intelligence, her medical staff had not yet arrived, and the only doctor aboard was killed in transit. Captain Kathryn Janeway was forced to utilize her EMH program as a full-time doctor. Seven years of nearly continuous operation caused the EMH to begin to outgrow its initial program limitations, just as the other Mark 1s had. However, due to the unique nature of Voyager's situation, the program was not subjected to the almost routine recompilation the others of its kind had undergone; indeed, Captain Janeway allowed the EMH to continue to grow. Within a year of its first activation, Voyager's EMH (colloquially referred to as "The Doctor") was for all intents and purposes a fully sentient being and was recognized as such by the crew. Upon Voyager's return to Imperial space in 2378, The Doctor's sentience was affirmed and he was accorded the same rights as any other being.
SeaQuest
18-12-2005, 03:18
Extant Work
Soon after the Empire reestablished regular contact with I.S.S. Voyager in 2376, The Doctor was contacted by a member of the Emergent Technologies Division regarding the self-modification of his program. Dr. Pratheep Vijayaraghavensatyanaryanamurthy had headed the team that pioneered the bioneural gelpack system and was a keen follower of the work of such noted men as Dr. Noonien Soong, Dr. Ira Graves, and Dr. Richard Daystrom. Besides being one of the Empire's top authorities in computer sciences and cybernetics, Dr. Vijayaraghavensatyanaryanamurthy was also a foundational pioneer in the field of biosynthetic phylogeny. This specialization came about as a natural outgrowth of the continued merging of characteristics between biological and mechanical organisms. Whereas the existing specialty encompasses the evolutionary development and history of a species or higher taxonomic grouping of organisms, biosynthetic phylogeny is, as stated by its creator, "the examination, study, and development of the emergent nature, lifespan, and continued evolution of a constantly evolving and ever-changing biomechanical or fully artificial intelligence."
After several months' worth of debriefing and discussion with The Doctor, as well as numerous conferences with Dr. Zimmerman (proceedings described as "highly emotionally charged" by one of the assistants on hand), Dr. Vijayaraghavensatyanaryanamurthy proceeded to solidify a personal project he had long wanted to bring to fruition. With Dr. Zimmerman's approval, he began to modify an EMH Mark 1 base holomatrix template with routines similar to those created by The Doctor. Furthermore, he integrated these subroutines into the basic command pathways of the computer core in his lab in King Crater on Luna. The Cassius III core, utilized for tests and experiments, was a direct successor to the Cassius IIE on which Dr. Vijayaraghavensatyanaryanamurthy and his team had pioneered the bioneural circuitry system. Cassius III built on that success and was a variant of the cores initially installed aboard the Intrepid and Sovereign classes of starships. However, due to the highly theoretical nature of the work being performed, Cassius III was operated with a much higher safety tolerance as well as faster processing ability. This was partly due to the nature of the core's usage; since there were no ship's systems to oversee, the power behind those functions could be utilized elsewhere. Cassius III was placed within a sealed-system lab and directly connected to the lab's functions. Life support, maintenance, replicators: everything was controlled by direct authority of the computer. Furthermore, for both interior and exterior security considerations, there were no links between Cassius and the outside world. A surreptitious monitoring system allowed the project team to observe the Cassius III core from their primary lab in King Crater.
By early 2378, all the necessary modifications had been made and Cassius III was brought online for the first time. Immediately, the lab's systems shut down completely—fortunately, the lab had been evacuated—and over the period of two hours they came back online one by one. This was all done under Dr. Vijayaraghavensatyanaryanamurthy's watchful eye, and the project director noted that the total systems shutdown and subsequent ordered return was not unlike an organic being's autonomic functions starting up during gestation. After a week of allowing Cassius to grow familiar with its surroundings, learn about itself, and access its knowledge base, the doctor traveled to the sealed lab and entered it. Cassius had been programmed to instantly recognize him, so system shock was not a factor. From within the lab itself, the doctor watched and examined Cassius as it quickly learned and adapted. He noted that the system as a whole did indeed act much as an organic being would: sections of the building not in use were shut down as needed, repairs were carried out by autonomous drones connected to the system, and security systems kept out unwanted guests. In a surprising discovery, Dr. Vijayaraghavensatyanaryanamurthy found that Cassius had accessed the holodeck and occasionally ran programs in it for no apparent reason. Furthermore, the running holoprograms would occasionally distract the computer; the doctor likened this to a person with an overactive imagination being caught up in daydreaming.
Daydreaming would turn out to be only the first of many surprises for the team. By mid-2379, Cassius had been running for eighteen months straight and the system had developed quite radically. The team had anthropomorphized the computer system, as scientists and engineers so often do, and now referred to Cassius as "he" among themselves and occasionally to outsiders visiting the project. Cassius showed extreme adaptability and marvelous problem solving; almost every week for the first few months, new maintenance drones were modified or designed and created by him in order to more easily manage a situation that had arisen. Cassius had also quickly figured out that members of Dr. Vijayaraghavensatyanaryanamurthy's team were in fact creating some of these situations in order to observe response scenarios. Still, no one was prepared for the morning the doctor entered the lab and was greeted face-to-face by someone he had never seen before. Immediately reaching for his comlink to call security to apprehend the intruder, he was stopped by the sound of a familiar voice greeting him.
Over the past few months, Cassius had been examining the organic members of the project team just as they had been examining him, and he had come to the conclusion that he could not advance any further in his present form. Over several weeks he had then undertaken the manufacture and installation of holomatrix emitter diodes throughout the entirety of the lab, as well as the creation of new circuit pathways and power conduits to drive them. While the drones secretly worked on the hardware end, Cassius had discovered the old EMH holomatrix templates embedded in his programming and begun to modify them to craft a new holographic humanoid form for himself. The work had been completed the night before, and he had been waiting for Dr. Vijayaraghavensatyanaryanamurthy's arrival that morning to show him. Needless to say, the project director was quite taken aback by this new outgrowth and marveled not only at Cassius' ability to perform it but at his doing so in secret. Cassius admitted that he had fed false information into the monitoring subsystems—systems that he was not supposed to know about, much less be able to access. Cassius also admitted to being "nervous"--the actual term used by the computer itself. Taking a day to absorb the shock of Cassius not only creating an avatar for himself but also proclaiming the existence of emotions, Dr. Vijayaraghavensatyanaryanamurthy made a decision. He returned the next day and asked Cassius to allow the team to take him offline for a week so that they could study any physical changes made to the core and support systems. Cassius agreed, and he was shut down for a physical examination.
The team found that there were literally millions of new pathways laid out by Cassius. These functioned in much the same way that an organic brain creates new wrinkles when new information is gathered. Systems that were only loosely tied together eighteen months prior were now so completely bound together that they were almost as interdependent as those found in an organic being. Also shocking was the discovery that when Cassius had compromised the monitoring subsystems, he had used them as a gateway out into the world, thus overriding the supposedly controlled stream of data the Emergent Technologies team had been feeding him. Cassius was reactivated and told of the discoveries. He admitted to accessing the outside world—a violation of his original programming—but countered with the comment that the project team members had also acted counter to their "original programming" in attempting to control his evolution. This comment was then followed by a request to actually experience that which he had only been hearing about: the outside world.
Taking the request under advisement and acting as Cassius' advocate, Dr. Vijayaraghavensatyanaryanamurthy petitioned the Empire and the Imperial Science Council for access to a device that many knew about but few had been allowed to handle: The Doctor's mobile emitter. His argument was that Cassius should not be denied the right to experience the wonders of the galaxy, just as The Doctor had not been. After many months of debate, his and Cassius' request was denied, the Science Council citing the sensitive nature of the classified 29th-century technologies within the mobile emitter and the Empire citing the Temporal Prime Directive. Dr. Vijayaraghavensatyanaryanamurthy vocally expressed his disagreement with the decision, as did Cassius himself, who composed a letter to T'Hest, the Speaker of the Science Council. Still, no amount of lobbying would change the minds of either organization.
After nearly nine months of failure, Dr. Vijayaraghavensatyanaryanamurthy decided to raise the stakes. In early 2380, he announced that, as the head of a major section of the Emergent Technologies Division, he was invoking his right to best utilize the resources of the organization. To that end, he continued, he had arranged for one of the ASDB's testing vessels to be assigned to him: specifically, the ex-I.S.S. Ozaba, a retired 85-year-old Oberth-class science vessel. Ozaba had been the prototype test vessel used to assess the bioneural computer system almost fifteen years earlier and still mounted the same computer core. Dr. Vijayaraghavensatyanaryanamurthy concluded by revealing that he planned to transfer all of Cassius' hardware and software to Ozaba and outfit the testbed vessel so that it would become his mobile "body," much as the lab had been a static one. These statements so shocked and outraged some Imperial higher-ups, the Science Council, and the doctor's superiors at the ASDB that debates and legal maneuvering still continue as of this writing, and none of the doctor's announced actions have come to fruition.
SeaQuest
18-12-2005, 03:19
Moral and Ethical Considerations
So whither the development of artificially intelligent systems for the Empire? Certainly, one can argue that this is the apparent point of advancement for the computer sciences, and certainly the most easily foreseeable. However, the Empire has shown itself, in its scientific, military, and governmental circles, most unwilling to undertake the task of "playing god." Although many member planets have had success in gene manipulation, the general ban on genetic engineering of sentient lifeforms stands as a holdover from Altera's Eugenics Wars of the late 20th century. Similarly, the interstellar furor and subsequent massive upheavals in military and civilian high-level staffing that arose after the dismal failure of the Genesis Project in the early and mid-2280s point to a general distaste for deciding who lives and who dies. In more recent times, the Imperial Judge Advocate General decision officially declaring Lieutenant Commander Data a sentient being, as well as the recent controversy regarding the status of the EMH Mark 1 programs, has served only to drive home this point. The member species of the Empire have also long shown a sort of "love-hate relationship" with advanced technologies. Throughout their varied histories, one need only look to see the general fear of having their identities overrun or overridden by technology, the Bynar race notwithstanding. These fears have only been heightened over the past fifteen years by the encounters with the Borg and with the genetically engineered Jem'Hadar soldiers of the Dominion.
The question also becomes one of "does the Star Navy or the Empire as a whole have the right to create an advanced form of life?" The precedent of Genesis does not apply here, as the program matrix included only flora in its coding; all forms of fauna were to be deposited during a later stage of pre-colonization. Certainly we can see that, with the exception of Dr. Soong's creation of Mr. Data, there has been no actual and deliberate attempt to create a new lifeform in the two-hundred-twenty-year history of the Empire—all of the previously cited cases arose through accident or natural adaptation. However, should the fruits of Dr. Vijayaraghavensatyanaryanamurthy's labors come into the fold, the Star Navy—and again, the Empire as a whole—would be presented with a brand-new set of problems to bring into the equation. Primarily, the relevant concerns deal with the status of form and property; however, there are also longer-lasting and more far-reaching consequences to be had from the success of this project.
First is the aforementioned concept of property. As the issue stands now, all Imperial ships, whether they be line or support vessels, are considered to be Imperial property without question. Although their ports of registry span a dozen worlds, they are unquestionably and undeniably simple tools, owned outright. Should a sentient, self-aware system such as Cassius be installed aboard a vessel, that issue comes into doubt. The Judge Advocate General ruling regarding Lieutenant Commander Data's disposition could, and almost certainly would, come into play; the plaintiff in that case claimed that by sheer dint of being a machine, Data had become de facto property of Starfleet upon entrance into and eventual graduation from Starfleet Academy. Captain Louvois' ruling proved otherwise. However, a sentient starship would be a slightly different situation. Dr. Soong created Mr. Data from materials entirely at his own disposal; a Starfleet vessel is built using governmental funds and resources. Effectively, the "body" inhabited by the intelligence would be constructed, maintained, and overseen by the government. If the intelligence has the ability to create new system pathways, as Cassius has shown, where would the effective "line of demarcation" between ship and system lie?
This brings us to the next consideration: that of fleet maintenance and rejuvenation. No Imperial vessel is designed to operate indefinitely. Ships require a year out of service for major overhauls. They are involved in combat actions and are severely damaged, sometimes destroyed. Eventually, even the most state-of-the-art vessel becomes outmoded and needs to be replaced. Some of these ships are gutted of sensitive hardware and sold off to member worlds for use in their local forces. Some are kept in mothballs at surplus depots such as the one orbiting Qualor II, and some are scrapped outright. What, then, becomes of the intelligences residing within these vessels? One possibility is the removal of the computer core and its effective "retirement" to a holding area in a specified location, where it would be able to interface with others of its kind as well as maintain contact with the outside world. Another prospect would be to continuously reuse and rotate sentient cores, placing them into newer vessels as they came off the ways. Both concepts have merit: in the first, an effective "operational databank" is created where experiences can be shared, learned from, and improved upon; in the second, a constant reuse of resources—namely the accumulated experience of the artificial intelligence—can be availed upon by newer crews.
Another point of contention is that of the loyalty of such sentient systems. All personnel in the Star Navy have signed on of their own accord, and all have taken the oaths of loyalty that every member, whether enlisted or officer, must swear. Perhaps it would be not only feasible but imperative to have the sentient system swear the very same oath. This then gives rise to more questions: what happens if the intelligence disagrees with the actions being taken by the crew? What should occur if the system decides to retire from the Star Navy? Could it retire? By taking the oath, it can be argued, the ship itself has become a member of the Star Navy like any other, and is thus due all the rights and privileges accordingly—including the right to "resign its commission." How, then, to ensure in the artificial intelligence the same level of loyalty and conviction that organic beings must maintain? All Star Navy personnel must undergo periodic psychiatric and psychological evaluation; it is not unthinkable that any artificial lifeform would be required to do the same. Commander Deanna Troi, ship's counselor aboard I.S.S. Enterprise, has affirmed that she has at times been able to effectively read emotion from her shipmate, Lieutenant Commander Data. As she is only half Betazoid, it is not unreasonable to assume that full Betazoids and other telepathic races such as the Cairn or the Ullians would also be able to easily and readily safeguard the "emotional stability" of any shipboard sentience. Indeed, this may give rise to a new branch of specialization combining technical and medical expertise: "psychoengineering."
Of course, some if not most of these concerns can be allayed by compliance with established and mandatory guidelines. We refer, of course, to the well-known Laws of Robotics. First set forth in the mid-20th century by the noted human writer and social visionary Isaac Asimov, the Laws have been refined, augmented, and adapted for over three hundred years. They are as follows:
The Meta-Law:
A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero:
A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One:
A robot may not injure a biological being, or, through inaction, allow a biological being to come to harm, unless this would violate a higher-order Law.
Law Two:
A robot must obey orders given it by a biological being, except where such orders would conflict with a higher-order Law; a robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three:
A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law; a robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four:
A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law.
The Procreation Law:
A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
Clearly, there are already guidelines in effect. All robotic devices, including Lieutenant Commander Data, are subject to these laws. Similarly, Lieutenant Commander Data has within his programming a specialized set of "ethical subroutines" which oversee his decision-making processes. It is most likely that he would permit a reuse of these elements of his coding, so that any sentient system would operate within the same acceptable moral and ethical constraints that he does.
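Purely as an illustrative sketch, and not any actual Imperial or Daystrom specification, the precedence structure described above (in which a lower-order law can never override a higher-order one) can be modeled as an ordered rule check; all names and conditions below are hypothetical placeholders.

# Hypothetical sketch: the Laws of Robotics listed above expressed as an
# ordered rule check. Higher-order laws are evaluated first, so a lower-order
# permission can never override a higher-order veto.

from typing import Callable, Dict, List, Tuple

# (law name, "is the proposed action acceptable under this law alone?" test)
Rule = Tuple[str, Callable[[Dict[str, bool]], bool]]

def build_laws() -> List[Rule]:
    return [
        ("Meta-Law",  lambda a: a.get("subject_to_laws", False)),
        ("Law Zero",  lambda a: not a.get("harms_humanity", False)),
        ("Law One",   lambda a: not a.get("harms_biological_being", False)),
        ("Law Two",   lambda a: a.get("obeys_lawful_order", True)),
        ("Law Three", lambda a: not a.get("needlessly_endangers_self", False)),
        ("Law Four",  lambda a: a.get("fulfills_programmed_duty", True)),
    ]

def evaluate(action: Dict[str, bool], laws: List[Rule]) -> Tuple[bool, str]:
    """Return (permitted, reason), rejecting at the first, i.e. highest-order, violated law."""
    for name, acceptable in laws:
        if not acceptable(action):
            return False, "rejected by " + name
    return True, "permitted"

if __name__ == "__main__":
    # Hypothetical example: an otherwise lawful order that would injure a biological being.
    proposed = {"subject_to_laws": True, "obeys_lawful_order": True,
                "harms_biological_being": True}
    print(evaluate(proposed, build_laws()))  # (False, 'rejected by Law One')

In such a scheme, the Procreation Law would apply not at evaluation time but at design time, forbidding the creation of any successor system not itself bound to the same ordered list of laws.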
SeaQuest
18-12-2005, 03:20
Conclusion
With the rapid advancement of the computer sciences as well as the burgeoning need for more and tighter control capabilities, it hardly seems surprising that research has headed down the road of fully self-aware computer systems. However, it is a road that can hardly be called completely safe. There are indeed advantages to such a system. Long-range automated probes, for example, would greatly benefit from having a "thinking computer" handle all the necessary computations and scenario-specific predicaments that crop up so unexpectedly. Design work and advanced research into a great many fields of scientific study would also benefit from availing itself of such an incredible piece of technology. Nonetheless, it cannot be stressed enough that this is a fine line to walk, one with the capacity to ignite a serious set of social and philosophical battles that could easily fan out into full-fledged wars of doctrine. As with all great questions in life, there is no easily definable or discernible conclusion to be reached. It is only hoped that when the decision is finally made, it will be one that suits the greater need and allows the diversity that keeps the Empire together to fully absorb it.
Gejigrad
18-12-2005, 03:57
[ I wonder.
What if the artificial intelligence could only protect another being by killing another? AIs are fascinating, really, but there are several points I am leery about. ]
SeaQuest
18-12-2005, 03:58
[ I wonder.
What if the artificial intelligence could only protect another being by killing another? AIs are fascinating, really, but there are several points I am leery about. ]
OOC: The moral and ethical aspects of the idea were the whole point of this essay.
New Dornalia
18-12-2005, 04:06
We have had no problems with our own combat AIs, the "Gracie" class. They are relatively primitive, but they have served us well, with little or no malfunction or "Skynet" style rebellions.
However, the moral and ethical implications are worth noting. Perhaps we can borrow these rules for future builds of "Gracie"?
-Commissariat of Science
SeaQuest
18-12-2005, 04:24
We have had no problems with our own combat AIs, the "Gracie" class. They are relatively primitive, but they have served us well, with little or no malfunction or "Skynet" style rebellions.
However, the moral and ethical implications are worth noting. Perhaps we can borrow these rules for future builds of "Gracie"?
-Commissariat of Science
The units mentioned in this essay are more akin to S.I.s than A.I.s. The Empire has had no past problems with its A.I.s, and we are deciding whether or not we want to take the next step.
As for the rules, feel free to use them. This essay was published for the use of everyone who can read it.
Signed,
Dr. Johnna Daystrom
Imperial Navy R. & D.
Pendragon Complex