Roger Chanels

Best Played Hands:
Brain-Computer Interfaces

It’s all coming.

Reality is quickly catching up to some of the key plot elements of Best Played Hands. After a two-year battle, and despite an ongoing investigation into animal-welfare violations, the FDA approved human trials for Neuralink brain implants on May 25, 2023. Concerns included the safety of the robot-performed surgical implantation and removal; side effects such as seizures, headaches, mood changes, or cognitive impairment; overheating of the lithium battery implanted in the skin behind the ear; migrating wires; and hackers accessing brain-wave data. But they were all addressed. Notably, no agencies—and few people—seem to be stepping in to ask questions like “Is this really a direction we want to go?” Like many technological advances, this one has progressed with absolute focus on how and relatively little consideration of whether.

The initial applications of Neuralink technology are designed to give amputees better control of prosthetic limbs that reflect natural motor skills; next in line are those with Parkinson’s disease, epilepsy, or spinal cord injuries. But Neuralink doesn’t plan to stop there. According to Reuters, “Musk envisions brain implants curing a range of conditions including obesity, autism, depression and schizophrenia, while also enabling Web browsing and telepathy.” If I were missing a limb and someone offered to turn me into a bionic man, I don’t think I could possibly say no. But depression? We’re really going to start sticking wires into our brains to treat mental illnesses? You know who was depressed? Lincoln. Churchill. Dickens, Poe, and Tolstoy. Woolf and Plath. Freud. Munch, van Gogh, and Picasso. Kierkegaard, Heidegger, and Nietzsche. Beethoven. Tesla had obsessive-compulsive disorder; Michelangelo was autistic; and Newton was bipolar, autistic, and schizophrenic. Imagine our world today if they had all been “fixed” by hardwiring computers to their brains. Is it possible the human race is better off without a computer regulating our dopamine levels?

Yet addressing physical and mental deficiencies is still not the endgame for Musk. He thinks it’s our only hope to survive. “We’re already cyborgs,” Musk told Maureen Dowd at Vanity Fair. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” He thinks we can’t beat ’em, so we have to join ’em. “I don’t love the idea of being a housecat, but what’s the solution? I think one of the solutions that seems maybe the best is to add an AI layer.”

I’d rather be a housecat. I’ll move down a rung on the IQ ladder if it means I get to keep owning my own thoughts. But I’m not convinced (at least not yet) that we have to move down, or that we should. I don’t know that post-singularity artificial-intelligence supra-geniuses will ever outperform humans in every aspect, just maybe in every measurable aspect. We know about neurons and synapses and neurotransmitters and such, but we haven’t reduced everything to predetermined outcomes. We don’t know the neurophysics of invention, we haven’t mastered the algorithms of instinct and morality, and we can’t predict every word that a human will say, or write.

Rutabaga.

What I’m saying is, the “singularity” event on the horizon, the point at which artificial intelligence exceeds human intelligence, may be based on a flawed metric. We don’t fully understand brain function, decision making, or intelligence, so how can we be so certain that we’re about to be surpassed? Why would we panic and surrender so easily?

Though I might be the only one, I’m also concerned that these in silico mind-melds will corrupt scientific advancement. AI might make good science happen a lot faster, yet, contrary to popular opinion, I think truly groundbreaking work will slow, or even cease. We’ll stay stuck in our current paradigms. We’ll optimize the hell out of them, but we’ll only end up with a highly refined local model.

What is too often forgotten about science is that we don’t actually know very much. For instance, the universe. This is what NASA has to say about it: “We know how much dark energy there is because we know how it affects the universe's expansion. Other than that, it is a complete mystery. But it is an important mystery. It turns out that roughly 68% of the universe is dark energy. Dark matter makes up about 27%. The rest - everything on Earth, everything ever observed with all of our instruments, all normal matter - adds up to less than 5% of the universe.” So far, science has produced a pretty good model for the behavior of about five percent of our world. I don’t think a byproduct of this limited understanding, namely AI, will help us see beyond its borders. I guess we’ll find out soon.

It’s all coming.