Tuesday, December 4, 2018

Reading 14


            I’m just not sure whether or not coding is the new literacy. It’s true, like the article said, that right now its power is concentrated in a few hands, like old-timey scribes, and it does seem logical and democratic that it would and should eventually filter down to everyone, so that it’s no longer this mystical power and instead it’s just like reading, something everyone’s expected to do. The devices controlled by code are so ubiquitous, and they affect everyone’s lives so radically, that it does make sense for everyone to have at least a general idea of what’s going on. There is something to be said for the comment that not everyone needs to code, just like not everyone needs to learn how to plumb… it’s true that both are labour-intensive, esoteric skills when you do them well enough to be paid for them. But also, everyone should be able to fix a clogged toilet. Everyone owns a plunger and handles the small plumbing details on their own. I liked the line about how not every cook is some amazing chef, and not everyone who can write is Jane Austen, but knowing how to scramble an egg or write an email makes your life easier. Especially since a lot of important ethical decisions about all these realms of computing taking over our lives are coming due now, and will continue to arrive over the next few years, I think it is important that people have some background in what coding is and how it really works. And why we do it at all. That might be the most important part. I do think the emphasis should be on the computational thinking aspect, though, like the first article mentioned. Who on earth needs Java syntax? Who teaches language by sitting three-year-olds down and explaining complex punctuation structure? Teach people the thought behind how all this works, and what would be most useful for them to know: the ideas and the concepts and some basic language skills, the same way you teach relevant words to a child, like “mommy” and “food” and “please”.
            The way it’s currently set up, I’d say that no, not everyone should be required to take a coding class. In some new world, or if everyone had access to teachers from the 12-week school the article talked about, the one where they spend a lot of time on concepts and “why”, I think a requirement would be useful. I just feel like it’s weird and kind of wrong to have the vast majority of society using these devices when how they work is a black box. Maybe it is like everybody filling out an old-timey census, putting an X next to some line they don’t understand, and it needs to change.
            The arguments for introducing everyone to coding are that code is everywhere, software is eating the world, and it’s the new literacy, so everyone should get on board. The big argument against it that I’ve always heard is that most people are never going to need this in their lives. My parents still complain about the coding class they had to take in high school—they did the turtle game that the Vietnamese children were playing! They always talk about how useless it was, making the turtle spin around while thinking, “I will never have to use this knowledge.” I just think that if it’s going to be effective, it has to be much more about the computational thinking aspect than about making a turtle go because the teacher said to. Something along the lines of: “Just so you know what’s going on in the world, so you can make more well-informed technology decisions and maybe see how these concepts apply to all the apps and tech you use every day, let’s crash-course you through how programmers solve problems, what their biggest tools are, and how they think and solve and communicate. Because these problem-solving skills aren’t inapplicable to your life as a doctor, lawyer, or housewife.”
            Schools are going to face huge challenges. It’ll be expensive. They’ll have to hire or train CS teachers, and they can’t keep CS teachers—if they’re good, they return to industry more likely than not within a few years. Also, schools already have curriculums, and they’re going to have to decide what takes priority and what to cut, which I don’t think anyone mentioned. Is CS more important than geography? What do we squeeze out for this class? I think that in some young grade, a computational thinking course, exposing kids to computers and how they think and the problem-solving steps that go along with that, should be a requirement. I want it to be on top of existing requirements, but I don’t know how feasible that is. If anyone’s still teaching cursive in the second grade, maybe it replaces that. I’m biased and not really a cursive person, though, so maybe not. After that one class, it could become an elective, or at least a highly encouraged one. Then make another required class in high school, just to give kids something where they have to sit down and learn a couple of things about this ubiquitous tech. Logic might also be a good component to have in that class. In high school, some legit programming might not be a bad idea, but please don’t hammer syntax; keep focusing on ideas and problem solving.
            Anyone can learn to program. Anyone can learn to read. I think some people have more of a knack for it (some people learn to read earlier), but if you hammered it home the way we hit children over the head with reading, everyone could get it. I don’t think everyone needs to be able to build web-based client-server apps with relational-database components, but I think people should know enough turtle-spinning to get loops, enough logic to get Booleans, and enough math to get binary. I don’t want anyone to be lost in this increasingly technological society.
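            Just to make concrete what I mean, here’s roughly the “scrambled egg” level I’m picturing: a little sketch of my own in Python (using its standard turtle module), not anything from the readings.

    # Enough turtle-spinning to get loops:
    import turtle
    t = turtle.Turtle()
    for _ in range(4):        # a loop: repeat the indented steps four times
        t.forward(100)        # walk forward 100 steps
        t.left(90)            # turn left 90 degrees; four lefts trace a square

    # Enough logic to get Booleans:
    drew_a_square = True
    print(drew_a_square and not False)   # prints True

    # Enough math to get binary:
    print(bin(6))             # prints 0b110, how 6 looks under the hood

    turtle.done()             # keep the drawing window open until it's closed

Nothing fancy, but that much would already demystify a loop, a true/false value, and a binary number.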

Monday, November 26, 2018

Reading 13


            From the reading about reverse engineering, it seems like the DMCA isn’t a huge fan of it. A lot of the FAQ article focused on things that were “legally risky”, much of which seemed to sum up the reverse engineering process. There was also the comment about how you could most likely go ahead and do it if nobody explicitly said you couldn’t (“For example, if a license agreement authorizes you to “use” the software, and it does not expressly prohibit reverse engineering, that may be all the permission you need.”), which to me suggests that the DMCA does disapprove and that reverse engineers can just sort of get away with it. Plus, the case studies in that article are downright messes; it seems like everyone this week was subject to massive appeals and retrials, with nobody able to agree. Under copyright, users aren’t allowed to copy an original author’s content (i.e., putting a movie on YouTube), and companies are allowed to put DRM on their material to ensure that nobody tries to copy it or use it for purposes they didn’t intend.
            On the surface level, it does seem right for companies to be able to protect their content from being ripped off by placing some sort of protection on it. That’s a thing people do all the time: they put locks on things that are theirs to keep people out and to protect what’s inside. The DVD companies definitely feel justified because the locks prevent a lot of basic piracy, so that’s a decently good use case. It doesn’t stop piracy entirely, but piracy isn’t awesome because it deprives the rightful creator of the financial benefit of creating, and a lock on DVDs is a relatively easy way to curb some of that. That one I do understand. There, it’s mostly convenience that makes people want to take content off of their own DVDs, so I think the DRM is probably justified and stripping it is a little wrong in that case. It’s hard, though, because people doing that could just be moving DVD content they’ve legally bought to their one desktop at home so they don’t have to worry about losing the DVD, because their dog always eats them. It’s hard to say that people are wrong for using content they’ve purchased in the ways they want to use it. I feel like “supportably illegal” is more where I’m at than “definitely immoral”. It’s kind of grey there. They probably shouldn’t do it, the law says they shouldn’t, but they did spend their money to support the artist, and now they’re just trying to use their legal property as they see fit. The DMCA doesn’t decide where litter box purchasers are allowed to keep their litter boxes, after all. In a lot of products, once it’s yours, it’s yours. But it makes sense why companies implement these specific locks, so I think I do have to stand behind them here: for digital media, they protect the hard-earned monetary reward that the content creator should be able to receive for their good and purchasable contribution to society.
            When it comes to the car/tractor thing, though, I think we’ve gotten a little bit ridiculous. The Wired article about John Deere was very fired up about losing the idea of real ownership, and I can’t say that I really blame them. A tractor very obviously isn’t software, and I don’t think it falls under copyright. There are plenty of other ways its intellectual property can have the force of law behind it (trademarks on the John Deere name, industrial design, patents on the special tractor parts that make it good), but copyrighting its software to the point that the software is protected even from the owner diagnosing and fixing it on their own (hello, American dream? This is tractors we’re talking about. This is the part of the population who can and will fix it themselves, because they’re old-school and resourceful) seems a little ridiculous to me. You bought the tractor, you bought the car; you should be able to tinker with whatever’s under the hood that might be broken or that you disagree with. The article mentioned that people could alter the emissions by manipulating the software, but that’s a matter for emissions law. Copyright shouldn’t extend its arm that far into protecting other areas of law. Its job is content, not material goods that happen to have some software behind them. I think diagnosis should be available there. Let the owner have a crack at what’s under the hood—they paid for it, and if it’s faulty or they’re unhappy, they should have the chance to remedy it. If they didn’t like the way the seat sat in 1960, they were allowed to take a screwdriver to it and improve the design. Why can’t they do that today?
            To me, it seems like legitimate researchers should be able to reverse engineer and poke at software looking for knowledge or for bugs and security problems. I don’t really want to open the floor to developers, though; it seems like that would lead to a lot of people just trying to rip off other people’s code and steal their business out from underneath them. Don’t forget, software is eating the world.
            It’s challenging, because publishers don’t have end-control over books. Books are also much harder to copy than the streams of bits in movies. Physical products, though, I think should belong, really belong (not fake belong), to the person who purchased them, with access to at least an API of the proprietary software inside, if not more. Do I own a DVD, or am I just leasing it for the life of that silver shell? I’m not sure, but those locks don’t bother me as much. I would feel better, though, if people could truly own the cars they buy, even though I get that software is the business advantage these days.

Monday, November 19, 2018

Reading 12


            I feel like the motivation behind building self-driving cars is a fearless drive (ha!) into the utopia of technology always being perfect, with a thin veil of oh-won’t-it-be-safer smudged on top. Yes, there are some legitimate arguments for driverless cars being safer than fallible humans. But I’ve never really felt like that was why anyone was pursuing them. It seems like technology people just think it’s the coolest thing ever, the next big thing, the next step toward making a Meet-the-Robinsons-esque world. I know there are some safety precautions on these cars driving around the regular world; the articles mentioned safety drivers as backups (sometimes two, one for data and one for the road), there’s city legislation, and companies have to stand up to public backlash for any little thing that goes wrong, but it’s always seemed to me that people are just screwing around in this sphere. And maybe that’s a little unfair. But to me, this is the issue where the argument of technology’s awesomeness and all-prevailing good seems to be the driving force (last one, I promise) behind this change, more than a radical safety overhaul (which just seems to be the justifiable reason to point to).
The argument for driverless cars is that people make mistakes. They’re slow, they lose focus, and they don’t always make the best decisions. Driverless cars would ideally know more (more data input), be able to make close calls more accurately and safely, and stop the huge number of auto accidents that kill people every year. Humans are not perfect drivers. Arguments against AVs, though, include the fact that computers (and networks) can be slow, too. They can focus on the wrong input, misjudge something a human would have no trouble understanding (an article mentioned a little unicorn sticker making a stop sign unrecognizable to one car), and they also don’t always make the best decisions. There’s a lot of data to go through, a lot of inferences to make, and sometimes a car isn’t even getting good data (snow, ice, darkness, poor road paint, etc.). I don’t think anyone knows if they will actually make our roads safer. From the arguments I’ve read, it seems like (1) it will most certainly get a lot worse before it gets better, i.e., more deaths will occur before AVs start doing more good than harm, and (2) even ardent skeptics still kind of hold that we could eventually get to the point where the roads are safer because of AVs. I think we all still have a little bit of that hope. There’s just a lot of scary between the here and now and that place.
How do programmers deal with the moral challenge? First of all, they have to be aware of it. I think they probably are now, for the most part. If they weren’t at the very start, the deaths that have happened since have ensured that. I feel like eventually the cars might just have to gravitate toward this utilitarian, what-kills-fewer-people decision mode, which isn’t going to make the cars popular. But cities (goodness, hopefully) won’t let cars on the road that could arbitrarily kill a bunch of people just to protect the passenger at all costs, and no one’s going to want to ride in a car that will decide to kill them if given the right scenario. I don’t know if it can, but an artificial intelligence should approach life-and-death situations with some weight, if that makes sense. There should be more guidelines in place than what it’s just learned through scenario training. Someone should have to sit down and write an if-this-then-that guidebook. There have to be rules. Right? Shouldn’t it come down to a human, at some point? We can’t leave our morality up to a machine. Not even for just one little erosive choice at a time. That has to be when we lose ourselves. I don’t know if it means putting together a code of programmers who decide what they will and will not ask cars to do, or if it means throwing this to the legislature (sigh), or the public, or priests or philosophers or physicians or psychics. I don’t know. How do we decide as a society what moral choices we want running around our streets? Because right now we do it person-to-person, every time someone gets behind the wheel. But AVs are going to have to present a united front of the same choices, at some point, in order for this communication and safety advance to work. We’re going to have to decide how we want these choices framed, who we’re going to choose to decide them, and whose fault it is when bad things happen. I don’t really want things traced all the way back down to the programmer who wrote line X of the code that may have caused a crash, but as that programmer I would also feel like I had actually caused the death of that human being. The company has the most money, so that’s kind of where it makes sense to rest the blame. I think blaming chip designers is a little silly, but maybe that’s just me. A little too far down the food chain for my taste.
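To show what I mean by a guidebook, here’s a toy sketch of my own (in Python, with made-up inputs; no real AV decides this way) of what hand-written if-this-then-that rules could look like.

    # A toy, hand-written moral rulebook. Entirely my own invention: the
    # inputs are made up, and this is nothing like a real AV's decision code.
    def choose_action(people_in_path, people_if_swerve):
        if people_in_path == 0:
            return "brake"     # nobody ahead, so braking is the safe default
        if people_if_swerve == 0:
            return "swerve"    # a clear escape path exists, so take it
        # the uncomfortable utilitarian line somebody has to decide to type:
        # when every option harms someone, harm fewer people
        if people_if_swerve < people_in_path:
            return "swerve"
        return "brake"

    print(choose_action(people_in_path=2, people_if_swerve=0))   # swerve

Even a dumb version like this shows the problem: somebody had to type that last comparison, and now it’s their morality running around our streets.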
The government regulates driving currently. It maintains roads and decides who is and isn’t allowed to drive. It seems to make sense that it regulates this new sphere as it becomes available. But, as always, it should really look to experts to make informed legislation about this.
I do not want a self-driving car, if that wasn’t clear already. They scare me and seem reckless right now. I don’t want to trust some rando in SoCal with my life and have no say in the moral choices that decide my fate in a crash. But then again, I already trust my life to a random stranger behind the wheel every time I get in an Uber.