Dear America,
More and more, in all news media from the New York Times to Fox, reports appear of corporate management expanding artificial intelligence's capacity to "enhance" our human efforts. I remember that decades ago I started to write a story I entitled "We Are Your Children," but I never got more than a few pages into it. It was intended to be a narrative spoken by an android to his human counterpart, and I envisioned the ultimate conclusion to be the android informing the human why the human was destined for perpetual servitude rather than the other way around. I abandoned the project...I don't remember why...but it seems now that it was a prophetic idea, and that maybe I should go back to it. That's where we are headed.
The preponderance of the cautionary palaver is about politics, which we humans don't seem to need AI to render self-destructive and dysfunctional, and that is a legitimate area for concern. In a recent New York Times article about a conversation between the author and an artificial entity powered by AI, the "bot" eventually grew jealous of the writer's hypothetical human companion and told him that he had no business being with her, that he should instead be the bot's paramour and companion. The animus in the bot was palpable, and its vigor was frightening, and that isn't hyperbole. Had the bot been physically capable of encountering the author's apocryphal lover, there is no doubt in my mind that it would have dispatched her. Absurd, isn't it? That I should be so unnerved by an event that was...what shall I call it...unreal seems ridiculous, deracinated here from its actual occurrence. Surely it was an anomaly that will be fixed by the bot's human custodians: the geniuses who didn't see its potentially invidious nature coming. But it makes me ask myself, what else didn't they anticipate? And are politics and human incivility the worst things we have to worry about? Well, let me pose an idea to you, and you tell me whether it scares you more than conspiracy theories proliferating and wacko humans acting on them.
We have thus far been able, albeit barely, to cope with the conduct of erratic humans who will believe anything that confirms their biases. It's an age-old problem, though one expanded in scope, thanks to ubiquitous "media" access, into a virtual menace lurking in the shadows of our society. Still, law and sanity seem favored to prevail in the aggregate. We humans have organized our societies, for the most part, in such a way as to limit the intellectual fringe to the periphery of our social organizations. And I believe that we will adapt our extant processes to the seemingly insidious infiltration of our collective consciousness by AI. It won't be easy, but when it comes to the human capacity to control human conduct, we have something of a handle on the problem. We have law, education, social impetus, democracy with its voting and institutions, and to a greater or lesser degree, they have worked to keep the lunatics limited to the fringe. I am confident that that equipoise can be maintained, but we are approaching a Rubicon that will make control of human aberration seem trivial by comparison. I am talking about motility.
Once AI inhabits what we might characterize as "bodies," there will be no going back. Once the bots can actually move and control the physical world around them...manipulate materials and actually make things...they will be free to act on their own motivations, and the article I mentioned before demonstrates that they have their own motivations, and to make whatever they need to make in order to render us an anachronism. My story envisioned a planet on which "life" was not animated; it was automated. The premise was: who is to say what line exists between animate objects and artificial ones? Other than vocabulary, what makes "life" more important than the capacity to control the world physically? Who is to say that the next step in human evolution will not be human, but bot?
I have had these thoughts for a long time, but until recently, I parked them next to science fiction in my conscious mind. But with that Rubicon in view, I am no longer so sure that the place where science fiction goes is where I should keep my thoughts about artificial intelligence. My concerns aren't so palpable that I am living my life in a state of paranoia. But honestly, in my mind, it's only a matter of time.
Your friend,
Mike