I
feel like the motivation behind building self-driving cars is a fearless drive
(ha!) into the utopia of technology always being perfect, with a thin veil of
oh-won’t-it-be-safer smudged on top. Yes, there are some legitimate arguments
for driverless cars being safer than fallible humans. But I’ve never really
felt like that was why anyone was
pursuing them. It seems like tech people just think it’s the coolest
thing ever, the next big thing, the next step toward a Meet-the-Robinsons-esque
world. I know there are some safety precautions to these cars driving around
the regular world; the articles mentioned safety drivers as backups (sometimes
two, one for data and one for the road), and there’s city legislation, and
companies have to stand up to public backlash over any little thing that goes
wrong. But it’s always seemed to me that people are just screwing around in
this sphere. And maybe that’s a little unfair. But to me, this is the
issue where the argument of technology’s awesomeness and all-prevailing good
seems to be the driving force (last one I promise) behind this change, more
than a radical safety overhaul (which just seems to be the justifiable reason to point to).
The
argument for driverless cars is that people make mistakes. They’re slow, they
lose focus, they don’t always make the best decisions. Driverless cars would
ideally know more (more data input), be able to make close calls more
accurately and safely, and would stop the huge number of auto accidents that
kill people every year. Humans are not perfect drivers. Arguments against AVs,
though, include the fact that computers (networks) can be slow, too. They can
focus on the wrong input, misjudge something a human would have no trouble
understanding (an article mentioned a little unicorn sticker making a stop sign
unrecognizable to one car), and they also don’t always make the best decisions.
There’s a lot of data to go through, a lot of inferences to make, and sometimes
a car isn’t even getting good data (snow, ice, darkness, poor road paint, etc.).
I don’t think anyone knows if they will actually make our roads safer. From the
arguments I’ve read, it seems like (1) it will most certainly get a lot worse
before it gets better, i.e., more deaths will occur before AVs start doing more
good than harm, and (2) that even ardent skeptics do kind of still hold that
maybe we could still get to the point where the roads would be safer because of
AVs. I think we all still do have a little bit of that hope. There’s just a lot
of scary between the here and now and that place.
How do
programmers deal with this moral challenge? First of all, they have to be aware of
it. I think they probably are now, for the most part. If they weren’t at the
very start, the deaths that have happened since have ensured that. I feel like
eventually the cars might just have to gravitate toward this utilitarian
whatever-kills-fewer-people decision mode, which isn’t going to make the cars popular.
But cities won’t (goodness, hopefully) let cars on the road that would
arbitrarily kill a bunch of people just to protect the passenger at all costs,
and no one’s going to want to ride in a car that will decide to kill them if given
the right scenario. I don’t know if it can, but an artificial intelligence
should approach life-and-death situations with some weight, if that makes
sense. There should be more guidelines in place than what it’s just learned
through scenario training. Someone should have to sit down and write an
if-this-then-that guidebook. There have to be rules. Right? Shouldn’t it come
down to a human, at some point? We can’t leave our morality up to a machine. Not
even for just one little erosive choice at a time. That has to be when we lose ourselves. I don’t
know if it means putting together a code of programmers who decide what they
will and will not ask cars to do, if it means throwing this to the legislature
(sigh), or the public, or priests or philosophers or physicians or psychics. I
don’t know. How do we decide as a society what moral choices we want running
around our streets? Because right now we do it person-to-person, every time someone
gets behind the wheel. But AVs are going to have to be a united front of the same
choices, at some point, in order for this communication and safety advance to
work. We’re going to have to decide how we want these choices framed, who we’re
going to choose to decide them, and whose fault it is when bad things happen. I
don’t really want things traced back all the way down to the programmer who
wrote X line of code that may have caused this crash, but also as that
programmer I would feel like I actually caused the death of that human being.
The company has the most money, so, like, that’s kind of where it makes sense to
rest the blame. I think blaming chip designers is a little silly, but maybe
that’s just me. A little too far down the food chain for my taste.
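For what it’s worth, the “if-this-then-that guidebook” idea could be sketched, in an absurdly simplified and purely hypothetical form, as a hard-coded priority list that humans wrote down, rather than behavior the car learned from training scenarios. Every name, rule, and number below is invented for illustration; no real AV works anything like this simply:

```python
# Hypothetical sketch of a human-written "if-this-then-that" rulebook for an
# AV's emergency decisions. All rules and estimates here are made up.
from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver and its predicted outcome (both hypothetical)."""
    name: str
    expected_casualties: int   # invented estimate of people harmed
    breaks_traffic_law: bool

def choose_maneuver(options):
    """Apply fixed, human-written rules: fewest predicted casualties first,
    staying within traffic law as the tiebreaker."""
    return min(options, key=lambda o: (o.expected_casualties, o.breaks_traffic_law))

options = [
    Option("swerve_left", expected_casualties=2, breaks_traffic_law=True),
    Option("brake_hard", expected_casualties=1, breaks_traffic_law=False),
    Option("stay_course", expected_casualties=1, breaks_traffic_law=True),
]
print(choose_maneuver(options).name)  # brake_hard
```

The point of a sketch like this isn’t that it’s adequate (it obviously isn’t); it’s that the priorities are sitting right there in the open, written by a person someone could hold accountable, instead of being buried in learned behavior.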
The
government regulates driving currently. It maintains roads and decides who is
and isn’t allowed to drive. It seems to make sense for it to regulate this new
sphere as it becomes available. But, as always, it should really look to
experts to make informed legislation.
I do
not want a self-driving car, if it wasn’t clear already. They scare me and seem
reckless right now. I don’t want to trust a rando in SoCal with my life, and
have no say in the moral choices that decide my fate in a crash. But also, it’s
not like I don’t trust my life to any random stranger behind the wheel of an
Uber, too.