Tech Buzz

OPINION

AI for President

Andy Rubin, Android’s daddy, last week made some interesting comments about quantum computing and artificial intelligence. The part I agree with is that it won’t be long until most of the things we own are connected to an intelligent machine. (When referring to something that will be far smarter than we are, the term “artificial” would not just be inaccurate; it would be rude.)

I disagree that there will be only one, however, because competition, latency, governments, differing uses (you don’t want a defense system controlling your air conditioning, for instance), and privacy concerns alone will ensure there are many.

However, the recent tragedy in Orlando and the poorly thought through responses by both presidential candidates got me thinking about what it would be like if we turned governing over to an AI.

I’ll share my thoughts on that this week and close with my product of the week: a new video card from AMD targeted at virtual reality for a very reasonable US$200.

AI And Orlando

The political response to the Orlando mass shooting by both candidates unfortunately was typical: a return to established talking points, and no real effort to map existing resources onto preventing a recurrence. Trump called for an even broader ban on Muslims entering the country, even though the attacker was a U.S. native, and Clinton returned to her talking points on gun control, even though the controls already in place functioned as designed and still had no impact on the attack.

As we’ve seen with the war on drugs and Prohibition, regulating something into illegality tends only to create a stronger criminal element, which in this case would directly undermine the primary goal of saving lives.

A properly programmed AI (note the “properly programmed” part, as there is growing concern that an improperly programmed AI could become an even greater problem) would start with the data and likely conclude the following: the crime could have been mitigated if the various databases that define people digitally were better cross-connected and a solution were structured to flag and resource people likely to become mass killers; and the current criminal system, which is built around assigning blame after the fact, should be modified to focus on prevention, with the effort resourced adequately.

Any behavioral traits that consistently lead to violence would be flagged digitally, and the AI then would determine which people were clear and present dangers and define a set of corrective actions, from mandatory anger management to removal from the general population.

The Result

Once the AI system became connected and resourced appropriately, anyone buying large amounts of ammunition and an assault rifle would be flagged. Anyone using hate speech against anyone would be flagged. Anyone with a history of domestic violence would be flagged, and anyone who appeared to be aligning with a hostile entity would be flagged.

When two of those elements were identified in the same person, that individual would be added to a list for investigation. Three or more would trigger prioritization for corrective action and surveillance. Anyone exhibiting all of those traits would be classified as a clear and present danger and prioritized for immediate mitigation. That likely would have prevented Orlando, and if it hadn’t, the focus would be on figuring out why and fixing it, in that order, so things would improve over time rather than settle into the near-stalemate we generally have now.
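To make that escalation logic concrete, here is a minimal sketch in Python of the kind of scoring rule described above. It is purely illustrative; the flag names, thresholds, and response tiers are my own assumptions, not a description of any real system.

    # Purely illustrative sketch of the escalation rule described above.
    # Flag names, thresholds, and response tiers are hypothetical assumptions.
    RISK_FLAGS = {
        "bulk_ammo_and_assault_rifle_purchase",
        "hate_speech",
        "domestic_violence_history",
        "alignment_with_hostile_entity",
    }

    def assess(person_flags):
        """Map the number of matched risk flags to a response tier."""
        hits = len(set(person_flags) & RISK_FLAGS)
        if hits == len(RISK_FLAGS):
            return "clear and present danger: immediate mitigation"
        if hits >= 3:
            return "prioritize for corrective action and surveillance"
        if hits >= 2:
            return "add to investigation list"
        return "no action"

    # Two flags present, so this person lands on the investigation list.
    print(assess(["hate_speech", "domestic_violence_history"]))

The point is only the shape of the rule: flags accumulate, each additional flag raises the response tier, and the system’s behavior can be audited and adjusted when it misses.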

Blocking all Muslims would be a massive wasted effort (the majority of mass shootings in the U.S. have not been carried out by Muslims). Banning the legal sale of weapons would force the purchases underground, eliminating the flags — data — now associated with legal purchases. Also, in areas where guns were less affordable, the alternative might be explosives, which typically are harder to track, as there generally is no legal way for average citizens to buy explosives in most countries.

In short, the government would commission the massive intelligence-gathering data center in Utah to flag people who met a set of conditions, identifying them as threats before they could commit an act of mass violence. A mitigating procedure would be in place to eliminate the threats. If it didn’t work, the failure would constitute a learning moment, and the system would take corrective action iteratively until it met with success.

The goal would be to fix the problem — not to persuade people to agree. An AI, at least initially, would care little about appearing right. It would be laser-focused on doing the statistically least difficult thing to solve the problem.

If the AI saw the NRA as a problem, it would design a plan to fix it — likely by focusing on eliminating gun company influence — but it wouldn’t just blame the NRA and figure that was making progress. There are easier and more effective things it could do anyway. The properly programmed AI always would look for the easiest effective path to a real solution.

By the way, when folks look into this without bias, they seem to find we don’t have a gun problem — or, more accurately, guns aren’t the problem we actually need to fix — we have a data problem.

Folks with a biased view are more interested in sticking it to folks who disagree than in trying to solve what is actually a fixable problem.

AIs vs. Politicians – and People in General

As I write this, I wonder if we shouldn’t refer to the coming wave of machines as “intelligent machines” and humans as “artificially intelligent.” Machines will start with facts and generally be designed to factor in all evidence before making a decision. However, with people — and this is apparent with Trump and Clinton — the tendency is to make the decision first, and then just collect the data that proves you made a good one.

This is evident in the argument between President Obama and Trump. Trump argued that Obama was more concerned with Trump than with fixing the problem, which actually is correct, given that the fix is within the president’s authority (adjusting monitoring systems to flag threats). Both men are focused on who appears right rather than on fixing the problem.

When working on a spreadsheet, have you ever gotten into an argument with your computer over who made a mistake? How about with your accountant? Computers don’t care about appearances. They do care about data, though, and if that data is bad or their programming is corrupted, then they can make errors — but they still often do better than their human counterparts. We ignore the data.

Wrapping Up: Machine Intelligence for President?

We’re not yet ready to put an AI in the highest office of the U.S., but that may be the only way we survive into the next century. It also could be the way we end the human race. You see, the other problem I haven’t yet touched on is that people are creating these machine intelligences, and that means some of them will be corrupted by design, so that they don’t do anything that disagrees with their creators’ world view.

That means there literally will be insane machine intelligences, because they were improperly programmed on purpose. The chance of putting one of those things into power unfortunately is very high.

For instance, look at how we deal with drone mistakes. We don’t call collateral damage “collateral damage”; we reclassify the dead as “combatants.” Can you imagine a smart weapon with that programming? Suddenly everyone would be a target, and we’d have designed a Terminator future.

Unfortunately, what that means is that unless we fix ourselves — which is really unlikely — we are rather screwed.

Rob Enderle's Product of the Week

The entire tech industry is hoping that, at least on the consumer side of the market, VR takes off like a rocket. Obstacles include a lack of content and the fact that cellphone-based solutions aren’t very good. PC-based solutions are wickedly expensive, and there is a very real likelihood you’ll hurt yourself if you don’t sit down when using them: you can either lose your balance or trip over the necessary tether.

Well, at last week’s E3, AMD stepped up to address the cost problem with an impressive $200 graphics card, the Radeon RX480. It is premium VR certified, and you should be able to add it to your existing Windows 10 desktop PC to make it capable of supporting VR.

AMD Radeon RX480

I played with the Radeon RX480 a few weeks ago in Macau, and it is an impressive piece of work. What allowed the company to reach the low price point was that it focused on things that would make VR work better — and that approach paid off in spades.

Similar technology is rumored to be going into Microsoft’s Xbox Project Scorpio, which suggests that a gaming system on steroids could be surprisingly affordable when it comes out next year (Xboxes typically sell at or below cost) and ideal for the VR games expected to arrive with that console.

However, the Radeon RX480 is due in stores at the end of the month, so you don’t have to wait that long.

I’m always up for a value, and when it comes to graphics the AMD Radeon RX480 should be one of the biggest bargains in the VR or desktop PC segment — at least for now — and it therefore is my product of the week.

Rob Enderle

Rob Enderle is a TechNewsWorld columnist and the principal analyst for the Enderle Group, a consultancy that focuses on personal technology products and trends. You can connect with him on Google+.
