
Is Nvidia Tesla’s Kryptonite?

Tesla sure didn’t have a good week last week, given the kind of press coverage it got. (I just did a search, and my search box autofilled “Tesla is going bankrupt” before I finished typing; apparently that’s a popular search topic.) The company’s latest quarterly results were astonishingly bad. I’m not that worried about Tesla going away, though, as its products are far too popular for it to disappear. Its management, on the other hand, clearly needs to be fixed.

What got me started looking at Tesla last week was that it pretty much announced that Nvidia was its Kryptonite. Here’s what I mean: Superman is pretty much indestructible, except for his vulnerability to the rare kryptonite, fragments of his obliterated home planet Krypton that made it to Earth. Superman would never hold a press event pointing out that if you wanted to kill him, this would be how to do it.

Yet that seemed to be what happened when Tesla spoke about its new self-driving car technology and claimed, inaccurately, that what it had was better than Nvidia’s tech. That alone wouldn’t be enough to warrant a column. However, it occurred to me that we really haven’t set a bar for the minimum capability needed for true autonomous driving. When you talk about things that can weigh tons and move at freeway speeds, that is a huge oversight. And what Tesla has proposed likely isn’t even close.

I’ll share some thoughts on Tesla, Nvidia and autonomous driving and close with my product of the week: something interesting from BlackBerry/Cylance that not only could make passwords obsolete but also could keep your PC from being compromised.

Tesla’s Comments

Tesla’s comments last week included some crowing about how its new self-driving computer was massively more powerful than Nvidia’s Drive Xavier platform. Tesla’s platform delivers 144 TOPS (trillion operations per second), while Nvidia’s Drive Xavier delivers only 21 TOPS (it actually does 30 TOPS, but that’s a nit). This is comparable to Ford comparing the horsepower of its Mustang to the horsepower of a Honda motorbike.

Nvidia Drive Xavier is a cruise-control enhancement product; it isn’t self-driving at all. Nvidia does have an enhanced product, still mostly a cruise-control enhancer, that will do 160 TOPS. However, its autonomous driving platform, Nvidia Drive AGX Pegasus, currently does 320 TOPS, well over twice Tesla’s figure, and even that may not be enough.
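
For those who want the arithmetic spelled out, here is a quick back-of-the-envelope comparison of the TOPS figures quoted above (an illustrative sketch only; the labels and numbers are simply the ones cited in this column):

```python
# Illustrative comparison of the TOPS figures cited above.
platforms = {
    "Tesla FSD computer": 144,        # TOPS claimed by Tesla
    "Nvidia Drive Xavier": 30,        # spec figure; Tesla quoted it at 21
    "Nvidia Drive AGX Pegasus": 320,  # Nvidia's autonomous driving platform
}

tesla_tops = platforms["Tesla FSD computer"]
for name, tops in platforms.items():
    print(f"{name}: {tops} TOPS ({tops / tesla_tops:.1f}x Tesla's figure)")
# Pegasus works out to roughly 2.2x Tesla's number -- the "well over twice"
# comparison made above.
```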

The other car makers — currently, most are working with Nvidia — know this. Tesla isn’t fooling anyone who counts. Tesla basically announced that if you want to make sure your autonomous driving technology is safer, better, smarter, and actually does self-driving, then you should buy Nvidia’s solution.

Oh, and if you do, it will beat the crap out of what Tesla plans to field. In effect, Tesla pointed to Nvidia’s Drive AGX Pegasus as its own kryptonite. Granted, after seeing this quarter’s financial results and the insane compensation Elon Musk is getting, I’m thinking Musk may be a far bigger problem for the company than Nvidia ever could be.

Minimum TOPS

Now Nvidia seems to think that 160 TOPS isn’t enough for autonomous driving, and thus it has a 320 TOPS system. Given that this is a safety system, the focus should be on setting minimum performance requirements.

I’m a car guy myself, and I love to drive, but the reason I support autonomous driving is the thousands of lives it is forecast to save.

I currently don’t drive on holidays if I don’t need to because of concerns about drivers under the influence. I expect that, like most of you, I’ve had close calls because some drivers are more interested in what is on their phones than in controlling their cars.

Still, autonomous driving has to be done right. The accidents involving Autopilot-driven Teslas and some of the test cars highlight that if you don’t get this right, bad things happen. None of us wants to replace one big problem with another. Thus, the focus needs to be predominantly on safety, I think, not on who has the most operations per second.

This means attention must be given not only to sensors, but also to vehicle design. I still think autonomous cars need to be designed more to optimize sensor efficiency than to look like conventional cars, which is the current approach. Further, the interaction between people and these systems needs to be thought through fully.

In that regard I’m concerned about Level 3 and 4 systems — the kind Tesla is promoting. Right now, if they get into trouble, they pass control over to a human. Because humans don’t process information at supercomputer speeds, it’s likely that these situations often will have bad outcomes.

I have a mental picture of kicking back in my self-driving car, reading a book, eyes drooping, when suddenly there’s an alert that the car is handing control over to me. I look up only to see the huge grille of a semi coming at my windshield. My final words are “Oh, cra..”

Up through Level 4, I’m fonder of the Guardian Angel model that Toyota proposed: the driver stays engaged, and the car just ensures the driver doesn’t screw up. If the system takes control, it owns it. Level 5 cars shouldn’t even have user controls. People seem to prefer this anyway, according to Intel surveys, as it removes pressure from passengers who apparently worry that they otherwise might have to step in.

Phony Autonomous Driving Scenarios

Before moving on, I want to share something I find really annoying about the challenges to autonomous driving that some people have been mounting, based on what they call “ethics rules.”

The typical argument goes something like this: “If an autonomous car is heading for a cliff and there are kids on both sides, it must be able to make the decision whether to kill the driver or kill the kids and save the driver.” A variant suggests a school bus instead of kids, but it is all BS.

How many times have you heard of a car heading for a cliff, or a school bus with kids on both sides, or kids and a wall, or any of this crap? The odds of this ever happening are almost impossibly small, and the odds of it happening to you are worse than those of winning every lottery on the same day.

That doesn’t mean a car doesn’t need to make decisions — but it will virtually always do better than you would do. For instance, if a kid runs out from between two cars, chances are you will hit the kid or crash, because you just can’t think fast enough, generally won’t see the kid coming, and you likely will be driving too fast in a residential area.

An autonomous car might still hit the kid, but the odds are far better that it won’t, because it is thinking at computer speeds, it more often will see the kid coming, and it won’t be speeding. Think of it more like smarter anti-lock brakes. Yes, we still have accidents with anti-lock brakes, but there are likely far fewer of them, because the problems associated with locking up the brakes are largely removed.
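
To put the reaction-time point in rough numbers (my own illustrative figures, not anything published by Tesla or Nvidia): a human driver typically needs a second or more to perceive a hazard and react, while a driving computer can respond in a small fraction of a second, and that gap translates directly into distance covered before braking even begins.

```python
# Back-of-the-envelope reaction distances. Assumed values: ~1.5 s for a
# typical human driver, ~0.1 s for an automated system. Illustrative only.

def reaction_distance_ft(speed_mph: float, reaction_s: float) -> float:
    feet_per_second = speed_mph * 5280 / 3600  # convert mph to feet per second
    return feet_per_second * reaction_s

for speed in (25, 35):  # typical residential and arterial speeds
    human = reaction_distance_ft(speed, reaction_s=1.5)
    system = reaction_distance_ft(speed, reaction_s=0.1)
    print(f"{speed} mph: ~{human:.0f} ft (human) vs. ~{system:.0f} ft (system) "
          "traveled before the brakes are even touched")
```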

In all cases, the computer will do better than you would do, because it can think faster and will see more, and it won’t be doing stupid things like speeding or reading email. The only real risk is that some company will cut corners on the system or the driver will override it.

Wrapping Up: Is Nvidia Tesla’s Kryptonite?

Not really. From what I can see, right now, Musk is Tesla’s kryptonite. The guy seems to be doing his level best to kill the company he created, which brings up an old study I read in college. That study indicated that CEOs typically are good at one or two phases of a company’s development, but never all three.

Those three phases are inception (when the company is first formed), transition (when the firm moves from inception to a sustaining enterprise) and sustaining (when the firm reaches a stable state). The issue, according to the study, is that startup CEOs tend to thrive on risk and excitement and micromanage to excess. They aren’t good with regulatory bodies because they generally don’t have to deal with them.

So, if they stay too long, they tend to become problems, because they create needless excitement, run afoul of regulatory bodies, and drive folks nuts with micromanagement. I think that accurately describes the problems we are seeing with Elon Musk.

Nevertheless, Tesla needs to stop overpromising and underdelivering on safety features like autonomous driving. People are getting hurt, and that isn’t the way to treat one of the most loyal customer groups on the planet. It isn’t a good way to treat anyone, really. Do it right or don’t do it at all.

Rob Enderle's Product of the Week

To say I hate passwords with a passion would be an understatement. Back in the 1980s, we did a study at IBM to determine where the greatest vulnerability existed in every company, and passwords won by a significant margin.

Here we are, more than 30 years later, still talking about how bad passwords and IDs are, yet they remain the predominant way we gain access to our stuff. Even with all of the breaches, the passwords “123456” and “PASSWORD” are still incredibly common. Even if we have a good one, the ease with which someone can trick us into giving up our password is embarrassing.

The BlackBerry analyst event last week featured what may be one of the best solutions I’ve ever seen. Called “CylancePERSONA,” it uses AI to determine who you are. This isn’t just for login purposes; it can tell if someone else has gained access to your laptop or smartphone and lock that person out.


CylancePersona from BlackBerry

Basically, the tool monitors you for several days and then builds a profile of how you normally work. For instance, you probably don’t regularly upload lots of files, encrypt entire volumes, or wander through random emails. You have a cadence that distinguishes how you type, a normal typing speed, and a surprisingly consistent way of using a mouse or touchscreen.

In addition, you likely have your phone with you when you work on your PC, and you generally are not in two places at once, so the system keeps track of where you are. Once it learns you, if someone else gains access to your system or tries to log on remotely, it recognizes within a few keystrokes or actions that it isn’t you and locks the intruder out of your PC. If there’s an attempt to access your stuff from, say, Nigeria, and it knows you are in New York, that login attempt will fail.
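
BlackBerry hasn’t published how CylancePERSONA scores behavior internally, so the following is purely a conceptual sketch of the general idea: learn a baseline of normal behavior during an enrollment period, then flag sessions that deviate from it. All of the names, weights, and thresholds here are hypothetical.

```python
# Conceptual sketch of behavioral anomaly scoring; not CylancePERSONA's
# actual implementation. All names, weights, and thresholds are made up.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Baseline:
    typing_speeds_wpm: list[float]  # typing speeds observed during the learning period
    usual_locations: set[str]       # coarse locations seen during the learning period

def anomaly_score(b: Baseline, current_wpm: float, current_location: str) -> float:
    """Return a 0-to-1 score; higher means the session looks less like the user."""
    mu, sigma = mean(b.typing_speeds_wpm), stdev(b.typing_speeds_wpm)
    typing_dev = min(abs(current_wpm - mu) / (3 * sigma), 1.0)  # capped z-score
    location_dev = 0.0 if current_location in b.usual_locations else 1.0
    return 0.6 * typing_dev + 0.4 * location_dev                # example weights

profile = Baseline(typing_speeds_wpm=[58, 62, 60, 59, 61], usual_locations={"New York"})
if anomaly_score(profile, current_wpm=25, current_location="Lagos") > 0.5:
    print("Session doesn't look like the enrolled user; lock the machine.")
```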

As with most current AI systems, the false-negative and false-positive rates are exceedingly low. This effectively creates a multifactor authentication system that can move with you from system to system. Even if you have the crappiest ID and password on the planet, it still can ensure that you aren’t compromised.

Because being both safe and secure is important to me, CylancePERSONA is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Rob Enderle

Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester. Email Rob.
