
The Big Ways Full Self Driving & Machine Learning Differ From Our Brains - CleanTechnica


November 27th, 2020, by Jennifer Sensiba


In a previous article, I discussed my long-term plan to learn more about machine learning, starting with the Elements of AI courses. While I’m only at the beginning of this journey, what I’ve learned so far has been very enlightening. It’s tempting to see systems like Tesla’s Full Self Driving (FSD) beta as a child that is learning by doing while we supervise and keep things safe. Eventually, we think, the “child” will grow up, be like us, and then maybe even become better at driving than humans are.

After studying the basics more, it’s clear that this isn’t what machine learning does.

Numbers vs. Ideas

The whole point of machine learning (and artificial intelligence in general) is to use machines that only process numbers (computers), and get them to process ideas.

While many real-world problems and games, like chess, can be reduced to equations and then fed through computers, something as simple as “Is that a stop sign or a yield sign?” isn’t readily reducible to math. All of the different lighting conditions, viewing angles, sizes, wear and tear, plants growing partway in front of a sign, and many other things introduce uncertainty. Coping with all of that through normal computer programming would require an impossible number of “if, then” rules. You’d have to program for nearly every scenario, and the slow, bloated software would still have gaps in what it could do.
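To see why, here’s a rough, hypothetical sketch in Python of what that rule-based approach looks like. The features and rules are made up for illustration; the point is that every new lighting condition, viewing angle, or bit of wear demands yet another hand-written rule, and the list never ends.

```python
# Hypothetical rule-based sign classifier. None of these features or rules
# come from a real system; they just show how the "if, then" approach works.

def classify_sign(shape, color, lighting):
    """Label a sign using explicit, hand-written rules."""
    if shape == "octagon" and color == "red" and lighting == "daylight":
        return "stop"
    if shape == "triangle" and color == "red" and lighting == "daylight":
        return "yield"
    # Faded paint at dusk? A branch hiding half the sign? A sharp viewing angle?
    # Each case needs yet another rule...
    if shape == "octagon" and color == "faded red" and lighting == "dusk":
        return "stop"
    # ...and thousands more would follow, with gaps left over no matter what.
    return "unknown"

print(classify_sign("octagon", "red", "daylight"))   # "stop"
print(classify_sign("octagon", "pink", "glare"))     # "unknown" -- a gap
```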

Believe it or not, this approach was tried in the 1980s. Many complex problems can be broken down into lots of small rules, which can be programmed. If you program enough information and rules into a computer system, you can build an “expert system” that helps a non-expert make decisions the way an expert would. The problem was that the time of human experts is valuable, and getting them to work with developers is expensive. There was also a lot of resistance from highly paid experts to creating computers that might potentially replace them, and that resistance could only be overcome by paying them even more.

Even when the work was done, the software packages couldn’t solve every problem the experts could, so you ended up paying twice: once for the experts to help build the system, and again when the system fell short.

Machine learning, on the other hand, aims to do away with the need for expert teachers. Instead of relying on definite rules added to the program by hand, a machine learning system takes example data and adjusts itself to fit that data’s “right” outcomes. Put more simply, you give it the input data and the desired outcomes, and it tunes its own internal calculations until it gets those outcomes right. Then, in theory, when you put new data in, it will give the right outputs.
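As a loose illustration of that fit-then-predict pattern, here’s a minimal sketch using scikit-learn. The features (a “redness” score and a corner count) and all the numbers are invented for the example; this isn’t how any real self-driving software is built, just the general workflow described above.

```python
# Minimal machine-learning sketch: example inputs plus the desired outcomes,
# and the model adjusts itself to fit them. Features and values are made up.
from sklearn.linear_model import LogisticRegression

# Example data: [redness score, number of corners] with the "right" answers.
X_train = [[0.9, 8], [0.8, 8], [0.7, 3], [0.9, 3]]
y_train = ["stop", "stop", "yield", "yield"]

model = LogisticRegression()
model.fit(X_train, y_train)   # the model tunes itself to fit the example data

# In theory, new data now gets the right output.
print(model.predict([[0.85, 8]]))   # expected: ['stop']
```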

One problem is that machine learning is only as good as the example data it gets. Amazon found this out the hard way when it tried to automate part of its hiring process. After feeding its computers 10 years’ worth of resumes, along with information about which applicants ended up getting hired, the machine learning system was ready to review new resumes and make hiring recommendations, but Amazon quickly found that the program was biased against women. While Amazon’s HR staff probably weren’t consciously discriminating against women, the computers adjusted the program to fit past hiring practices, and with them a bias against hiring women.

Garbage in, garbage out.

This happens because artificial neural networks don’t actually learn anything. They calculate lots and lots of probabilities, and then feed those probabilities into other parts of the network that calculate more probabilities. The complex system’s weights (the importance of each probability) are adjusted until the inputs give the right outputs most of the time. The network doesn’t have consciousness, experience, or a conscience.

Rich example data can give good outcomes, but the only thing the artificial neural network “considers” is whether the outcome fits the example data it was adjusted to fit.
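For the curious, here’s a toy NumPy sketch of what “adjusting the weights” amounts to: a single artificial neuron whose weights get nudged until its outputs match the example data. The numbers are invented and real networks chain millions of these units together, but the principle is the same: probabilities go in, probabilities come out, and nothing in between understands anything.

```python
# Toy single-neuron "network" with made-up data. The weights are just numbers
# that get adjusted until the outputs fit the examples; nothing is "learned"
# in the human sense.
import numpy as np

X = np.array([[0.9, 8.0], [0.8, 8.0], [0.7, 3.0], [0.9, 3.0]])  # [redness, corners]
y = np.array([1.0, 1.0, 0.0, 0.0])   # desired outcomes: 1 = "stop", 0 = "yield"

w = np.zeros(2)   # weights: how much each input "matters"
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    p = sigmoid(X @ w + b)                # probabilities, not ideas
    error = p - y                         # how far the outputs are from the examples
    w -= 0.1 * (X.T @ error) / len(y)     # nudge the weights toward the examples
    b -= 0.1 * error.mean()

# Outputs are now high for the "stop" rows and low for the "yield" rows,
# but only because the weights were bent to fit these examples. Feed it
# biased or garbage examples and it will reproduce them just as faithfully.
print(sigmoid(X @ w + b).round(2))
```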

Human Thinking

Much research has gone into human thinking over the millennia. Knowing how humans learn and adapt, and how we make decisions, is of immense importance. The more we know about ourselves, the better we can cope with difficult situations.

A flowchart of John Boyd’s OODA Loop, by Patrick Edwin Moran (CC BY 3.0).

While human decision-making uses something like example data (our past experience and training), there’s a lot more going on under the hood. Our observations are fed into a much more complex “machine.” We look for patterns that match not only our past experience, but also new information we’ve just learned and information from our culture and any religions we might have been raised with, and we are capable of using imagination to combine all of these things before acting. U.S. Air Force Colonel John Boyd captured this in his famous “OODA Loop” (Observe, Orient, Decide, Act), and it’s a loop because the process repeats over and over.

We are also capable of coming up with good decisions in a hurry, with limited information. Using what Malcolm Gladwell calls “thin slicing” in Blink, people can often make better decisions under time pressure than they do after careful deliberation. Our ability to quickly identify relevant information, sometimes subconsciously, leads to “gut instincts” or “I have a bad feeling about this.”

We also have Theory of Mind. As social animals, knowing what other people are likely thinking is extremely important. We make decisions based not only on what others think, but also on what they will think if we act a certain way. We also think about what other people think we are thinking, or what other people think others are thinking. We even consider what other people think we are thinking they think, and what we think they think about that. It’s complex, but it comes naturally to us because cooperation, and sometimes deception, is important to our survival.

On the other hand, we aren’t perfect. Biases, bad cultural information, mental disorders, adrenaline, and intentional deception can all mess up our ability to make decisions.


How This Relates To Autonomous Vehicles

For self-driving cars, you give the software images of curbs and road, and it adjusts itself to categorize the curbs and roads properly so it can identify where to drive and where not to drive (among many other things). The network doesn’t think about what other drivers are thinking (theory of mind), nor does it think about what other drivers think it might be thinking. Slowly passing another driver in the fast lane is okay, but it might not be okay if a line of cars starts getting stuck behind us. We understand that others may become frustrated and angry, and we often adjust our behavior to avoid conflict. The software doesn’t have “gut feelings.” It doesn’t think about deceiving other vehicles’ AI systems. It doesn’t understand the signs that another car’s driver is angry and teetering toward road rage.

Self-driving cars don’t look in the mirror at flashing red and blue lights and wonder whether they might belong to a fake cop. They just pull over as required by law, and it doesn’t occur to them to call 911 first to verify that the weird-looking cop car is real.

Some of the correct human responses to these situations might be in the training data for mimicry, but others are not.

On the other hand, the machine never gets tired. It never gets drunk. It doesn’t have bad days at the office, painful breakups, or road rage. It never checks social media while driving, and doesn’t make distracting phone calls or send texts.

Which system will be safer in the long run boils down to whether a machine that is limited but relentlessly consistent ends up safer than humans who are less limited but imperfect.

Do you think I’ve been helpful in your understanding of Tesla, clean energy, etc? Feel free to use my Tesla referral code to get yourself (and me) some small perks and discounts on their cars and solar products. You can also follow me on Twitter to see my latest articles and other random things.


About the Author

Jennifer Sensiba is a long time efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16 and drove a Pontiac Fiero. She likes to explore the Southwest US with her partner, kids, and animals. Follow her on Twitter for her latest articles and other random things: https://twitter.com/JenniferSensiba


