
OPINION

The Importance of Microsoft’s 5-Point Blueprint for Public Governance of AI

In the rapidly evolving world of artificial intelligence, whom should we trust to govern its use? Microsoft presents intriguing answers in its "Governing AI: A Blueprint for the Future" report.

Many technology leaders agree that while AI could be hugely beneficial to humans, it could also be misused or, through negligence, terminally damage humanity. But looking to governments to address this problem without guidance would be foolish, given that politicians often don’t understand even the technology they’ve used for years, let alone something that has just come to market.

As a result, when governments act to mitigate a problem, they can do more damage than good. For instance, it was right to penalize the old Standard Oil Company for its abuses, but breaking the company up shifted control of oil from the United States to parts of the world that aren’t all that friendly to the U.S. Another example was correcting RCA’s dominance of consumer electronics, which shifted that market from the U.S. to Japan.

The U.S. has held on to tech leadership by the skin of its teeth, but there is no doubt in my mind that if the government acts without guidance to regulate AI, it will simply shift the opportunity to China. This is why Microsoft’s recent report, titled “Governing AI: A Blueprint for the Future,” is so important.

The Microsoft report defines the problem, outlines a reasonable path that won’t reduce U.S. competitiveness, and addresses the concerns surrounding AI.

Let’s talk about Microsoft’s blueprint for AI governance, and we’ll end with my Product of the Week: a new line of trackers that can help us keep track of things we often have trouble locating.

EEOC Example

It is foolish to ask for regulation without context. When governments react tactically to something they know little about, they can do more damage than good. I opened with a couple of antitrust examples, but perhaps the ugliest example of this was the Equal Employment Opportunity Commission (EEOC).

Congress created the EEOC in 1964 to rapidly address the very real problem of racial discrimination in employment. That discrimination had two fundamental causes. The most obvious was discrimination in the workplace itself, which the EEOC could and did address. But an even bigger problem existed when it came to discrimination in education, which the EEOC didn’t address.

Businesses that hired on qualifications, and that used any of the methodologies the industry had developed at the time to scientifically award positions, raises, and promotions based on education and accomplishment, were asked to discontinue those programs to improve company diversity. Too often, that placed inexperienced minorities into jobs.

By placing inexperienced minorities in jobs they weren’t well trained for, the system set them up to fail, which only reinforced the belief that minorities were somehow inadequate when, in fact, they hadn’t been given equal opportunities for education and mentoring in the first place. This was true not only for people of color but also for women, regardless of color.

We can now look back and see that the EEOC didn’t really fix anything. It did, however, turn HR from an organization focused on the care and feeding of employees into one focused on compliance, which too often meant covering up employee issues rather than addressing the underlying problems.

Brad Smith Steps Up

Microsoft President Brad Smith has impressed me as one of the few technology leaders who thinks in broad terms. Instead of focusing almost exclusively on tactical responses to strategic problems, he thinks strategically.

The Microsoft blueprint is a case in point. While most are going to the government and saying, “you must do something,” which could lead to other long-term problems, Smith has laid out what he thinks a solution should look like and fleshes it out elegantly in a five-point plan.

He opens with a provocative statement, “Don’t ask what computers can do, ask what they should do,” which reminds me a bit of John F. Kennedy’s famous line, “Ask not what your country can do for you; ask what you can do for your country.” Smith’s statement comes from the book he co-authored in 2019, Tools and Weapons, where he called it one of the defining questions of this generation.

This statement underscores the importance and necessity of protecting people, and it makes us think through the implications of new technology to ensure that the uses we find for it are beneficial rather than detrimental.

Smith goes on to talk about how we should use technology to improve the human condition as a priority, not just to reduce costs and increase revenues. Like IBM, which has made a similar effort, Smith and Microsoft believe that technology should be used to make people better at what they do, not to replace them.

He also talks, and this is very rare these days, about the need to anticipate where the technology will need to be in the future so that we can foresee problems rather than merely responding to them tactically. Transparency, accountability, and assurance that the technology is being used legally are all critical to this effort, and the report spells them out well.

5-Point Blueprint Analysis

Smith’s first point is to implement and build on government-led AI safety frameworks. Too often, governments fail to realize they already have some of the tools needed to address a problem and waste a lot of time effectively reinventing the wheel.

The U.S. National Institute of Standards and Technology (NIST) has already done impressive work in the form of its AI Risk Management Framework (AI RMF). It is a good, though incomplete, framework. Smith’s first point is to use and build on it.

Smith’s second point is to require effective safety brakes for AI systems that control critical infrastructure. If an AI controlling critical infrastructure goes off the rails, it could cause massive harm, or even death, at significant scale.

We must ensure that those systems get extensive testing and deep human oversight, and that they are tested against both likely and unlikely failure scenarios, to make sure the AI won’t jump in and make things worse.

The government would define the classes of systems that would need guardrails, provide direction on the nature of those protective measures, and require that the related systems meet certain security requirements — like only being deployed in data centers tested and licensed for such use.
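To make the safety-brake idea concrete, here is a minimal sketch in Python of the kind of hard-limit-plus-human-override pattern such a guardrail implies. This is not from Microsoft’s report; every name and threshold here is a hypothetical illustration of one way a deployment might sit a brake between an AI’s proposed action and the physical system it controls.

```python
# Hypothetical sketch of an AI "safety brake" -- not Microsoft's design.
# A brake sits between the model's proposed action and the actuator:
# out-of-bounds actions halt the system instead of being "fixed" by the AI.

from dataclasses import dataclass


@dataclass
class Limits:
    """Hard operating bounds, set by the operator or regulator (hypothetical)."""
    min_output: float
    max_output: float


class SafetyBrake:
    """Vets every AI-proposed action; fails safe and supports human override."""

    def __init__(self, limits: Limits):
        self.limits = limits
        self.halted = False

    def human_override(self) -> None:
        """A human operator can halt the system at any time."""
        self.halted = True

    def vet(self, proposed: float) -> float:
        """Pass an action through only if it is within hard limits."""
        if self.halted:
            raise RuntimeError("System halted by safety brake")
        if not (self.limits.min_output <= proposed <= self.limits.max_output):
            # Fail safe: stop and wait for a human rather than improvising.
            self.halted = True
            raise RuntimeError(
                f"Proposed action {proposed} outside safe bounds; halting"
            )
        return proposed


# Usage: every action the AI proposes passes through the brake first.
brake = SafetyBrake(Limits(min_output=0.0, max_output=100.0))
safe_action = brake.vet(42.0)  # within bounds, passed through
# brake.vet(250.0)             # would halt the system instead of acting
```

The design choice that matters is the fail-safe default: when a proposed action falls outside the sanctioned envelope, the system stops and waits for a human rather than letting the AI attempt a correction on its own.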

Smith’s third point is to develop a broad legal and regulatory framework based on the technology architecture for AI. AIs are going to make mistakes. People may not like the decisions an AI makes even when those decisions are correct, and people may blame AIs for things the AI had no control over.

In short, there will be much litigation to come. Without a legal framework covering responsibility, rulings are likely to be varied and contradictory, making any resulting remedy difficult and very expensive to reach.

Hence the need for a legal framework, so that people understand their responsibilities, risks, and rights, can avoid future problems, and, should a problem arise, can reach a valid remedy more quickly. This alone could reduce what is likely to become a massive litigation load, since AI is largely a green field when it comes to legal precedent.

Smith’s fourth point is to promote transparency and ensure academic and nonprofit access to AI. This just makes sense; how can you trust something you can’t fully understand? People don’t trust AI today, and without transparency, they won’t trust it tomorrow. In fact, I’d argue that without transparency, you shouldn’t trust AI because you can’t validate that it will do what you intend.

Furthermore, we need academic access to AI so that people entering the workforce understand how to use the technology properly, and nonprofit access so that nonprofits, particularly those focused on improving the human condition, can put it to effective use in their good works.

Smith’s fifth point is to pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that will arise. AI will have a massive impact on society, and ensuring this impact is beneficial and not detrimental will require focus and oversight.

He points out that AI can be a sword, but it can also be used effectively as a shield that’s potentially more powerful than any existing sword on the planet. It must be used to protect democracy and people’s fundamental rights everywhere.

Smith cites Ukraine as an example where the public and private sectors have come together effectively to create a powerful defense. He believes, as I do, that we should emulate the Ukraine example to ensure that AI reaches its potential to help the world move into a better tomorrow.

Wrapping Up: A Better Tomorrow

Microsoft isn’t just going to the government and asking it to act to address a problem that governments don’t yet fully understand.

It is putting forth a framework for what that solution should, and frankly must, look like: one that mitigates the risks surrounding AI use upfront and ensures that, when problems do occur, pre-existing tools and remedies are available to address them, not the least of which is an emergency off switch that allows for the elegant termination of an AI program that has gone off the rails.

Whether you are a company or an individual, Microsoft is providing an excellent lesson here in how to get leadership to address a problem rather than just tossing it at the government and asking it to fix things. Microsoft has outlined the problem and provided a well-thought-out solution so that the fix doesn’t become a bigger problem than the problem itself.

Nicely done!

Tech Product of the Week

Pebblebee Trackers

Like most people, my wife and I often misplace stuff, which seems to happen the most when we rush to get out of the house and put something down without thinking about where we placed it.

In addition, we have three cats, which means the vet visits regularly to care for them. Our cats have discovered unique and creative places to hide so they don’t get their nails clipped or their mats cut out. So, we use trackers like Tile and AirTags.

But the problem with AirTags is that they only really work if you have an iPhone, as my wife does, which means she can track things but I can’t because I have an Android phone. With Tiles, you must either replace the device when the battery dies or replace the battery, which is a pain. So, too often, the battery is dead when we need to find something.

Pebblebee works like those other devices but stands out because it’s rechargeable and will work either with Pebblebee’s app, which runs on both iOS and Android, or with the native apps in those operating systems: Apple Find My and Google Find My Device. Sadly, it won’t do both at the same time, but at least you get a choice.


Pebblebee Trackers: Clip for keys, bags and more; Tag for luggage, jackets, etc.; and Card for wallets and other narrow spaces. (Image Credit: Pebblebee)


When you’re trying to locate a tracked item, the tracker beeps and lights up, making things easier to find at night and less like a bad game of Marco Polo. (I wish smoke detectors did this.)

Because Pebblebee works with both Apple and Android and you can recharge the batteries, it addresses a personal need better than Tile or Apple’s AirTag — and it is my Product of the Week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
Rob Enderle

Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester. Email Rob.
