Emerging Technology (Non-Existent) Ethics

The speed at which our technology develops has outpaced the speed at which we can comprehend its social, political, and ethical implications.

The Problem

The lack of awareness, discussion, legislation, judicial precedent, and regulation around the social, political, and ethical issues raised by emerging technologies is arguably one of the largest problems of the near future. Be very scared.

This post will start by motivating the problem with examples. Those examples will be followed by brief discussions of the top issue areas as they currently stand: autonomous robotics, human enhancement, cyberwarfare, the lack of global standards and regulation, and finally, our general epistemic ignorance regarding the long-term future of all technologies. I will conclude with some overtly optimistic possible solutions, as well as a final word of warning.


(Semi) Autonomous Robots

Our emerging technologies use robotics everywhere – from cell phones to healthcare to the ‘soldiers’ in our armies. Not only is this trend growing at an exponential rate, it is also changing in kind: more and more of the robots being used are considered ‘autonomous’ or ‘semi-autonomous.’

For our purposes, and to avoid getting stuck down semantic rabbit-holes, consider an ‘autonomous robot’ to be a robot that has a lot of ‘choices’ (whether or not those choices could be considered full-stop ‘agential decisions’ is irrelevant to this discussion). Basically, ‘autonomous’ robots are robots that, like humans, choose actions based on past learning.
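If that sounds abstract, here is a minimal sketch of ‘choosing actions based on past learning’ – a tiny epsilon-greedy learner in Python. The action names and reward numbers are invented purely for illustration; no real robot runs this exact loop:

    import random

    # Hypothetical actions a robot might take; payoffs are invented for illustration.
    actions = ["brake", "swerve", "proceed"]
    value = {a: 0.0 for a in actions}   # the robot's learned estimate of each action
    count = {a: 0 for a in actions}

    def world_feedback(action):
        # Stand-in for the environment; noisy and unknown to the robot.
        base = {"brake": 0.6, "swerve": 0.3, "proceed": 0.8}[action]
        return base + random.gauss(0, 0.2)

    for step in range(1000):
        if random.random() < 0.1:                   # occasionally explore
            a = random.choice(actions)
        else:                                       # otherwise exploit past learning
            a = max(actions, key=lambda x: value[x])
        reward = world_feedback(a)
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]  # running-average update

    print(value)  # the robot has 'learned' which action tends to pay off

The point is not the particular algorithm – it is that the robot’s behavior is driven by accumulated experience, not by a script a programmer can read off.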

The more complex a system becomes, the more likely it is that emergent, stochastic properties will arise from it. The robots we are building are complex cognitive systems that can ‘learn’ from ‘experience’ – meaning they will manifest properties and behaviors that we could not predict by looking at their component parts. The problem is compounded by the combinatorial growth in interactions between complex systems: if we can’t predict the behavior of a single complex system like a ‘self-driving’ car, then it is highly unlikely that we will be able to predict the behavior of millions of self-driving cars interacting on the road.
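To make the emergence point concrete, here is a toy Python version of the classic Nagel–Schreckenberg traffic model. Each simulated car follows three dead-simple rules, yet ‘phantom’ traffic jams emerge from their interactions – behavior you would never predict by reading a single car’s code. (This is a textbook toy model, not a claim about how real self-driving cars work.)

    import random

    # Nagel-Schreckenberg cellular automaton: N cars on a circular road of L cells.
    L, N, VMAX, P_DAWDLE = 100, 30, 5, 0.3
    positions = sorted(random.sample(range(L), N))
    velocity = [0] * N

    for step in range(100):
        gaps = [(positions[(i + 1) % N] - positions[i] - 1) % L for i in range(N)]
        for i in range(N):
            v = min(velocity[i] + 1, VMAX)          # rule 1: accelerate
            v = min(v, gaps[i])                     # rule 2: don't hit the car ahead
            if v > 0 and random.random() < P_DAWDLE:
                v -= 1                              # rule 3: random hesitation
            velocity[i] = v
        positions = [(positions[i] + velocity[i]) % L for i in range(N)]

    # Cars sitting at velocity 0 mark a jam that nobody programmed in.
    print(sum(1 for v in velocity if v == 0), "cars stuck in an emergent jam")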

This discussion isn’t meant to be fear-mongering of the I, Robot variety. These are already live problems.

Remember the robot cannon in South Africa in 2007? Nine dead and 14 wounded because a robot cannon “opened fire uncontrollably, killing and injuring soldiers” (SA National Defence Force spokesman Brigadier Gen. Kwena Mangope). The cannon was programmed to ‘choose’ its targets semi-autonomously, feeding targeting data directly to the fire-control unit of two 35mm cannons and reloading on its own when empty. “By the time the gun had emptied its twin 250-round auto-loader magazines, nine soldiers were dead and 14 injured” (Source).

This issue is real. Here. Now. In case you didn’t hear, the United States Congress set a goal that one-third of U.S. military deep-strike aircraft be unmanned by 2010, and one-third of ground combat vehicles be unmanned by 2015.

I could continue listing the crazy, wild-wild-west realities of emerging robotics, but then this article would get out of hand. Instead, I will point you to the person who got me interested in this type of research, Dr. Patrick Lin. When I was an undergraduate at Cal Poly, I had the good fortune of working for Dr. Lin as the head research assistant (read: sole research assistant) on a project funded by the Office of Naval Research and the NSF to study the social, political, and ethical implications of using robots in warfare. In the process, I got to meet grandfathers of robotics like Dr. George Bekey, and listen to them expound on the potentially crazy futures in store for a world full of autonomous robots.

Human Enhancement

Time to get really science-fictiony – human enhancement is a trip. We are not far off from some pretty ridiculous-seeming technologies for making ‘enhanced humans.’ My favorites, from peeking behind the veil, are what I am dubbing the ‘wifi brain’ and the ‘life-extender.’

Imagine a human mind that can access (read/write) the internet via conscious and representative thought – that is the idea behind the ‘wifi brain.’ Your boss asks you for the latest population numbers on the south Indonesian Android smartphone user market, and you just search Google and get the answer within your own head, answering your boss with the correct figures within seconds as though you had known the answer to begin with. Ugh … shit that’s cool.

Now, imagine that your heart and lungs could stop functioning for over an hour without damaging your brain. You drown, but someone gets you to a hospital within an hour of your heart stopping, and the doctors resuscitate you with no brain damage. All because of the ‘life-extender,’ a little ‘chip’ implanted in your brain that carries artificial red blood cells and a replenishable oxygenation mechanism … Mind. Blown.

This is happening NOW. And not just awesomely … there are real problems that arise from this cool tech, and many of them stem from our semantic impotence.

We can’t draw a clear distinction between ‘therapy’ and ‘enhancement.’ That matters because our whole healthcare system is tightly coupled to those two concepts. For example, access to therapeutic treatment tends to be treated as a basic right for U.S. citizens (regardless of economic status), while access to enhancement-based treatment is not. This makes sense to us, intuitively – we feel better subsidizing polio vaccinations than cosmetic breast augmentation surgeries.

Medical treatment, insurance coverage, human rights, patient rights, and more are legally and institutionally tied to an artificial, unprincipled distinction between ‘therapy’ and ‘enhancement.’ How we treat people depends on our perception of which of the two they are receiving, so we sure as hell had better have a principled and equitable way of drawing the line.

This might not seem like much of an issue; however, there are countless plausible scenarios that our current concepts and systems cannot handle as our ability to ‘enhance’ humans with technology grows. Insurance will pay for new prosthetic legs for an accident amputee. But will insurance pay for new prosthetic legs for the Olympian who wants to run faster?

What about contact lens ‘enhancements’? Will contact lenses that provide night vision be covered the same way as contact lenses that restore ‘clear daytime’ sight (normal contact lenses)?
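To see just how brittle the distinction is, imagine trying to encode a coverage policy in code. The function below is a hypothetical, made-up rule – not any real insurer’s logic – and the edge cases above break it immediately:

    # A hypothetical coverage rule, invented for illustration only.
    def covered(restores_baseline: bool, exceeds_baseline: bool) -> bool:
        # Naive policy: pay for 'therapy' (restoring normal function),
        # deny 'enhancement' (exceeding normal function).
        return restores_baseline and not exceeds_baseline

    print(covered(True, False))  # accident-amputee prosthetics: covered, clearly therapy
    print(covered(True, True))   # the Olympian's faster prosthetics: denied, yet they restore function too
    print(covered(False, True))  # night-vision contacts: denied as pure enhancement
    print(covered(True, False))  # ordinary contacts: covered, but who decided 'baseline' vision?

Every treatment gets squeezed into two booleans, and whoever defines the ‘baseline’ is quietly writing the whole policy.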

Not only do emerging enhancement technologies create legal and political nightmares around the ‘therapy/enhancement’ distinction highlighted above, they also widen an already considerable socio-economic gap. There is no easy way to bridge the coming gap between the enhancements the rich and privileged can afford and the enhancements the poor and underprivileged can afford. And the gap compounds at the generational level: how does my daughter compete with your daughter if my daughter can’t run as fast because I couldn’t buy her the ‘extra-fast enhancement package’? Maybe my dad couldn’t afford the ‘extra-smart enhancement package’ a generation earlier, which would have aided my own wealth-building. Maybe your father could afford your intelligence enhancement, enabling your wealth, providing your daughter with further enhancements.
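A crude toy model (every number here is invented) shows why that generational gap grows geometrically rather than linearly: if enhancement buys a modest edge in earning power, and earnings buy the next generation’s enhancements, the two family lines diverge a little more every generation:

    # Toy model of compounding enhancement advantage; every number is invented.
    def family_wealth(start: float, generations: int, price: float) -> float:
        wealth = start
        for _ in range(generations):
            if wealth >= price:
                wealth -= price   # buy the kids the 'enhancement package'
                wealth *= 1.5     # enhanced generations out-earn unenhanced ones
            else:
                wealth *= 1.1     # unenhanced generations grow wealth more slowly
        return wealth

    rich = family_wealth(start=100.0, generations=5, price=20.0)
    poor = family_wealth(start=10.0, generations=5, price=20.0)
    print(round(rich, 1), round(poor, 1))  # ~363.8 vs ~16.1 after five generations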

Cyberwarfare

Have you ever seen the movie Live Free or Die Hard? As ridiculous and unrealistic as that movie is, its general premise is spot on. Because the world’s infrastructure – especially in more ‘developed’ regions – relies directly on technology like the internet, cyberterrorism is not just possible but probable … and scary.

Compared to the resources required to fight a conventional war, the resources needed to fight a cyberwar are trivial – a few hackers, some computers, maybe some inside information, and internet access. This effectively destroys the barrier to entry for war, and, ceteris paribus, lower barriers to entry mean more war. You no longer need to be a ‘big country’ to fight a ‘big country’ … Global accountability nightmares abound.

The World has no Standards

or: “Smokey, this is not ‘Nam … There are rules”

This section will be relatively short and unsweet. There is an obvious lack of global technology standards, global technology regulation, and global enforcement of those standards. There is no consensus on what technology should be allowed, how it should be regulated, or how enforcement should proceed. If you need examples, open a newspaper. If you need it spoon-fed, here are two: the global intellectual-property breaches surrounding technology, and the recent Facebook ‘emotional contagion’ study.

The world of technology is a shit show – college-Spring-Break-in-Cancun-style. Unclear standards. Unclear regulation. Minimal enforcement of the unclear standards. If the damage done by the lack of a global technology roadmap is analogous to underage drinking, then the future looks to be the youngest, sloppiest, drunkest, wildest, and stupidest of Spring Breaks.

We are Stupid

or: We can’t Predict the Future

I have a feeling that a contributing factor to the above lack of standards, regulation, and attention is our impotence in the face of epistemic uncertainty. We have tried to regulate technology before, and we have failed spectacularly. So now we are scared to try at all.

Because of our limited epistemic position, we have enforced standards that turned out to be actively harmful. We have regulated the wrong things while missing the areas that genuinely needed attention – see pesticides like DDT. We have focused on short-term regulation while ignoring the long term, because the short term is so much easier to predict. And now that we have seen our short-term predictions go ill-fated in the long term, we seem to want to make no predictions at all.

General Approach to a Solution:

  1. get head out of sand (read butt)
  2. recognize and control fear of failure
  3. leverage expertise and numbers (the general population)
  4. think long-term, regulate short-term
  5. starting now is better than doing nothing

Possible Concrete Action-Steps

  • Create some regulatory boards
  • Leverage user-generated transparency
  • Take a stab at writing some legislation
  • Get some ‘sensible’ media attention
  • Stimulate some open, public discourse around the issues

BURN ALL TECHNOLOGY!!! AHH, THE LUDDITES ARE COMING! HIDE THE SCIENTISTS!!