Making Change Happen Every Day: Q&A With GSA’s David McClure

The U.S. government spends US$80 billion annually on information technology. The U.S. General Services Administration (GSA) is directly involved in nearly 25 percent of federal IT procurement activity through its Schedule 70 acquisition program, including nearly $9 billion in information technology investments.

GSA has emerged as a leader in guiding federal investments for information technology, a role that has been enhanced with its responsibilities for implementing the Obama Administration’s “Open Government” initiative. In May 2010, GSA created its Office of Citizen Services and Innovative Technologies (OCSIT), while also retaining an Office of Communications and Marketing.

OCSIT is rapidly becoming a leader in the use of new media, Web 2.0 technologies and cloud computing to proactively make government agencies and services more accessible to the public. Through websites such as USA.gov, call centers, publications and other programs, OCSIT facilitates more than 200 million citizen touchpoints a year.

David L. McClure, who had been an associate administrator of GSA since August 2009, was appointed the first associate administrator for the Office of Citizen Services and Innovative Technologies.

Prior to joining GSA in 2009, McClure served as managing vice president for Gartner’s government research team. Before Gartner, he was vice president for e-government and technology at the Council for Excellence in Government. Earlier, McClure had an 18-year career with the Government Accountability Office.

In this exclusive interview, McClure shared with the E-Commerce Times how he sees GSA’s role as a leading force in advancing the use of information technology in the federal government.

E-Commerce Times: Why did you take this job, knowing full well the challenges of a huge federal constituency, numerous Congressional oversight committees, appropriations uncertainties, and the myriad components of the federal establishment?

David McClure:

First of all, I was on the Obama transition team, and I got to see the inside view as to where the administration was going with some of the technology issues in government. That indicated this was going to be an exciting and different time for the use of technology in government. I knew the president himself, personally, was very much a driver behind innovation and creativity in government, and we have seen that actually in his first 18 months with the whole agenda of openness, transparency and participation in government being linked to some of the new technology areas.

I figured I’ve been down these roads before, and with some of this experience, I could help the new team coming in on navigating some of these choppy waters, like oversight and appropriations. So it was a combination really of new directions and challenges, a spirit of enthusiasm, and almost an entrepreneurial-type attitude that made it a good place for me to be right now.

ECT: What would you say were your top two or three personal goals in taking this job, and how has GSA met those goals thus far?

McClure:
I always viewed GSA as being in the driver’s seat for intergovernmental, cross-agency service delivery. It’s a natural for GSA — it’s always been government-wide in its focus. And certainly, this is a time when technology is being used at the enterprise level to gain efficiencies, and to involve customer interactions that were not really a priority in the past.

So I think GSA is really in a position to accelerate the adoption of technologies and to push agencies into areas where GSA can take a lead in identifying the solutions and help to mitigate the risks. So my goal was to come into government and land in a spot where the focus would be creativity, innovation, pushing the envelope, moving quickly, and trying to make change happen on a daily basis — and that’s kind of how I’ve been running this office.

ECT: What do you mean by operating more at an “enterprise level” than the way things were run before?

McClure:
We have that traditional problem of what we call “silos” or “cylinders” of excellence, and what we need to do is to start looking more broadly at the enterprise level on what problems exist, on what opportunities exist, and where our accomplishments can be. That requires a corporate or enterprise view on what is going on in the organization, whether that’s an entire government department or whether it’s a specific agency.

We are very good in government at looking narrowly and focusing on single programs. But we find it difficult to step back and sort of have that portfolio view, and assess opportunities and areas where we can do away with some investments, and accelerate innovation in some higher-impact areas.

ECT: Rightly or wrongly, government is often criticized for being the last one to adopt innovation. How do you see the opportunities for having federal agencies be on the cutting edge of electronic commerce?

McClure:
Well, this is a new time in government. The administration is pushing agencies to be innovative and creative, and to think outside the box. So a lot of the push behind the open government directive, for example, has been to challenge the agencies to be more transparent and to deliver services to citizens not only in more cost-effective ways but in more meaningful ways. And as a result, we have some very interesting things going on in government. So the attitude, if you will, is opening up the Net and allowing a greater level of input of ideas, and actually asking the people who are the recipients for their views.

It’s not just asking people in a GSA office or department what their ideas are for improving a service, but to seek input from the customers — the citizens. It’s a new way of operating for government to have a continuing dialog with its constituency base. It’s not just an occasional survey or phone call or focus group. It’s an ongoing process, and it becomes just part of the way you do business to build in feedback that will generate improved results in service delivery to the customer.

The Challenges and Prizes program is one example of this, where agencies are asking citizens or individuals to operate almost as entrepreneurs and partners with government. So the agency can say to the public: “Here’s our business problem. Help us solve this.” This opens the funnel for ideas on how government can become more meaningful in the everyday lives of citizens. This is one of the goals of the administration: to show the relevance of government and to demonstrate its operational excellence, and with that come higher trust levels.

ECT: Prior to this job at GSA, you worked at Gartner for a time. Were you able to bring back any private sector approaches from Gartner when you returned to government service at GSA?

McClure:
Yes, definitely. At Gartner, I was running the public sector research effort, so I was still connected with government operations within a private-sector setting. So my experience was kind of a blend. Gartner was focusing on business issues and technology issues to get its clients to understand the intersection of the two.

And I think that’s what we brought back to government, using an approach that asks how are we solving real business problems in government, or solving customer service needs, rather than introducing technology because it’s kind of cool or interesting. That leads to taking an approach where we ask ourselves at the end of the day, what have we improved in the cost area, what have we improved from a benefit angle and what have we done to improve the quality of the services we provide.

ECT: Were there any performance metrics you could bring back from Gartner — like the “Magic Quadrant” analysis or ROI — to GSA and government service?

McClure:
Well, we can apply a version of those metrics, but we have to tweak them somewhat for government. In the commercial sector, the focus is on revenues and margins, but in government the focus is on public service. Government has to concern itself with costs and revenues and budget outcomes, but it is not inherently in a profit mode. The primary metric has to be showing value for money from a citizen angle or a public benefit angle.

ECT: Can you give two or three examples of where GSA in particular has facilitated innovation related to information technology?

McClure:
Sure. We’ve made a big push into mobile applications. We think a lot of the services that government provides, and a lot of the information that the government makes available, should be accessible on mobile devices. It’s really just the way the world is going. People are interacting more and more on their mobile devices for social needs, and increasingly for their consumer needs as well. You actually have a better chance of reaching someone on a mobile device than on a desktop, a laptop or a notebook, because people carry mobile devices around all the time.

The penetration rate of the mobile market is quite high and growing, and there is even greater potential for smartphones, which can interact with the Internet and accept data. So in that space, we’ve got to start building applications and delivering services. We’ve just created 21 applications on USA.gov, for example, where you can get air travel wait times, or product recall notices on food or consumer items, or applications to help you make calculations when you are shopping. We’re looking to put up things that are of practical value.

Another area is in improving the search engine capability. Increasingly, people are interacting with the government through the search mode, rather than visiting individual agency websites. So they will go to USA.gov or Google or Bing to search for government services. We are creating a more robust government search capability that has some technical algorithms that will help focus the search.

So if someone is typing in the word “food,” it will automatically trigger a menu of related references, like “food safety,” as the person is typing and searching. It makes it easier and faster for people to get where they need to go. So to me, that’s enhancing citizen participation in government, and I think it pays some dividends on technology investments.
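The behavior McClure describes — a typed prefix triggering a menu of related topics — can be sketched as a simple prefix lookup over a sorted topic list. The topic list and function below are illustrative assumptions, not USA.gov’s actual search implementation:

```python
# Minimal typeahead sketch: return topics that start with the typed prefix.
# The TOPICS list and suggest() are hypothetical, for illustration only.
from bisect import bisect_left

TOPICS = sorted([
    "food assistance",
    "food recalls",
    "food safety",
    "passport renewal",
    "tax forms",
])

def suggest(prefix, limit=5):
    """Return up to `limit` topics beginning with the typed prefix."""
    prefix = prefix.lower()
    # bisect_left finds where the prefix would sort, so matches are contiguous
    start = bisect_left(TOPICS, prefix)
    results = []
    for topic in TOPICS[start:]:
        if not topic.startswith(prefix):
            break
        results.append(topic)
        if len(results) == limit:
            break
    return results

print(suggest("food"))  # ['food assistance', 'food recalls', 'food safety']
```

A production search service would rank suggestions by popularity and query logs rather than alphabetically, but the prefix-matching idea is the same.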

ECT: One emerging technology is the cloud. Can you describe how the cloud fits into government and GSA’s role here?

McClure:
The use of the cloud is inevitable. We are moving to cloud solutions in the technology market in general, so government can’t pretend it’s in a market all to itself. You look at the projections from all the research organizations and they all predict a healthy take-up in cloud investments in the next five to 10 years. What we need to do in government is to figure out how to utilize the power of cloud technology and then use it appropriately. It’s not for everything.

GSA needs to play a role in the effective provisioning of cloud services to agencies so they have access to cloud solutions via our procurement schedules and our services. Secondly, GSA needs to step up and think of itself as a viable cloud services provider, so whether it’s storage or software or services, our role would be to leverage the cloud government-wide. That’s a role for GSA, and that’s one of the advantages of cloud computing: it offers enormous opportunities for economies of scale.

If we are able to offer storage space from multiple agencies and do it at tremendous cost savings and then manage that infrastructure for federal agencies, I think that’s a win-win situation for GSA and for our agency partners.

ECT: How do you see the role of GSA in the procurement of information technology? Do you see GSA more directly involved in IT procurement or in acting as a facilitator to federal agencies under a decentralized procurement approach?

McClure:
Well, I think a valid case can be made for GSA to act as a broker in using procurement vehicles to set up common [IT] services across agencies, such that we gain at least the potential for cost savings through economies of scale. This isn’t to say that there shouldn’t be improvements in how GSA is running these procurement functions, and I know that the commissioner of the FAS (Federal Acquisition Service) is looking at those [procurement] schedules to see how well they operate in generating cost savings. It doesn’t mean that this process should be used for everything.

There are some unique needs — or niche needs or lower volume requirements — where government-wide procurement is inappropriate. But I don’t think we’ll go back to heavy decentralization. If anything, we know the efficiencies that can result with creating a common identification of IT needs — and then obtaining these services from the commercial side by using a high volume of acquisitions as a driver for getting discounts.

More by John K. Higgins
Leapwork CEO: No-Code Platforms Democratize Testing Automation


Using no-code technology instead of dedicated code programmers could become the future of software development in retail marketing and related software-building industries. But it is not a one-size-fits-all solution.

No-code, an approach to creating software applications that requires little in the way of programming skills, lets workers within a business create an application without formal programming knowledge or training in any particular programming language.

In a nutshell, no-code platforms enable users to create software applications such as online forms or even a fully functional website or add functionality to an existing site or app.

It is important to clarify that numerous different applications of no-code platforms exist, according to Christian Brink Frederiksen, CEO of Leapwork, a global provider of automation software.

No-code platforms are fairly new. So companies planning to adopt a no-code approach must thoroughly vet and test no-code tools on the market to make sure that the selected products live up to their claims.

“A lot of platforms out there today claim to be but are not truly no-code at all, or lack the power required to do what they say they’ll do without additional coding,” he told TechNewsWorld.

Leapwork developed a test automation product that is accessible and easy to maintain. Its secret sauce provides rapid results at a lower cost, requiring fewer specialist resources than traditional test automation approaches.

“At Leapwork, we have democratized automation with our completely visual, no-code test automation platform that makes it easy for testers and everyday business users to create, maintain, and scale automated software tests across any kind of technology,” noted Frederiksen. That enables enterprises to adopt and scale automation faster.

Security Remains Top Concern

An obvious question about no-code platforms is how the technology addresses the security problems that plague both proprietary and open-source programming.

If well designed, no-code platforms can be safe and secure, Frederiksen said. When manually coding from scratch, it is easy to introduce bugs and vulnerabilities that hackers can exploit.

“Because the no-code platforms are designed to automate the creation of an app or perform a function in an automated way, they are inherently much more consistent,” he explained.

Of course, the no-code platform itself needs to be secure. Before choosing a solution, organizations should conduct a thorough security audit and opt for a solution that is ISO-27001 and SOC-2 compliant, he recommended.

Coding Pros and Non-Pros Alike

No-code platforms are not primarily for programmers, or for IT coders to use in-house in lieu of outsourced software developers, though both use cases come into play successfully.

No-code platforms are certainly useful for IT coders and programmers, but the primary value of a no-code test platform is to extend the capability to create and test applications to people who are not trained as software developers, offered Frederiksen.

For example, Leapwork makes it simple for testers and everyday business users to set up and maintain test automation at scale. This empowers quality assurance teams to experience shorter test cycles and immediate return on investment.

Advantages for DevOps

Speeding up testing is a huge benefit, noted Frederiksen, because hand-coding creates a big bottleneck, even for an experienced DevOps team. While testers are extremely skilled at designing tests and understanding the underlying complexity of software, they are not traditionally trained to code.

He offered a good example.

Claus Topholt, Leapwork’s co-founder and chief product officer, worked at an investment bank before joining Frederiksen to found Leapwork in 2015. Testing was vital because the bank depended on high-volume rapid trading. If software quality was poor, it could literally cause the institution to go bankrupt.

“Claus decided to build a simplified programming language to build tests so that the testers could set them up, speeding up the process. But he quickly discovered that testing and programming are totally different domains, and, frankly, it’s not fair to force testers, who are already highly skilled, to learn the extremely complicated skill of programming,” explained Frederiksen.

During a discussion with the testing team, Claus and his colleagues started to use a whiteboard to draw a flow chart. Everyone immediately understood what it meant.

Lesson Learned

The flow chart was such a simple, clear way of expressing something complicated. So, it was obvious this model was the way forward for enabling testers to create their own sophisticated tests without coding.

“The lesson was, if you give testers something as intuitive as a flow chart to create automated tests, you’ll save a lot of time and remove bottlenecks, as you’re not relying on the time and expertise of developers,” said Frederiksen.

Claus left the investment bank to found Leapwork and created what became a no-code platform. They built a visual language that enables business users to automate testing using a flowchart model.

Leapwork CPO and Co-Founder Claus Topholt (L) | Leapwork CEO and Co-Founder Christian Brink Frederiksen (Image Credit: Leapwork)

“It democratizes automation because it is so easy for non-coders to use and maintain, which in turn empowers businesses to scale their automation efforts and accelerate the development process,” Frederiksen said.

No-Code Q&A

Headquartered in Copenhagen, Denmark, Leapwork last year raised $62 million in the largest Series B funding round in Danish history. The round was co-led by KKR and Salesforce Ventures.

Leapwork is used by Global 2000 companies — including NASA, Mercedes-Benz, and PayPal — for robotic process automation, test automation and application monitoring.

We asked Frederiksen to reveal more details about the inner workings of the no-code solution.

TechNewsWorld: How can companies add automation into their testing processes?

Christian Brink Frederiksen: One way is to incorporate automated tests as an integral part of moving from one stage of the release process to another.

For example, when a developer checks in code to the development server, a series of automated tests should be triggered as part of the same process that generates the build.

These regression tests can identify big bugs early, so the developer can fix them quickly, while the code is still fresh in the developer’s mind.

Then, as the code moves to test and, ultimately, production, again, a series of automated tests should be triggered: extensive regression testing, verification of its visual appearance, performance, and so on.
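The staged gating Frederiksen describes can be sketched as a small promotion function: a check-in triggers a build, and the change advances to the next stage only when every automated regression check passes. The stage names and checks below are hypothetical, not Leapwork’s actual pipeline:

```python
# Illustrative sketch of check-in-triggered regression gating.
# build(), regression_checks(), and the stage names are assumptions
# made for this example, not any real product's API.

def build(change):
    """Stand-in for the build a code check-in triggers."""
    return {"version": change, "artifacts": ["app"]}

def regression_checks(build_result):
    """Each check returns (name, passed); a real suite would be far larger."""
    return [
        ("build produced artifacts", bool(build_result["artifacts"])),
        ("version recorded", build_result["version"] is not None),
    ]

def promote(change, next_stage):
    """Run the checks; promote the change only if all of them pass."""
    result = build(change)
    failures = [name for name, ok in regression_checks(result) if not ok]
    if failures:
        return f"blocked: {', '.join(failures)}"
    return f"promoted to {next_stage}"

print(promote("change-123", "test"))  # promoted to test
```

The same gate runs again as code moves from test to production, with the check suite growing at each stage — which is why making those checks maintainable by non-programmers matters so much to the throughput of the pipeline.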

It is critical that business users — like a business analyst or a tester in a QA department — have the ability to implement this automation. That is where no-code is so vital.

How does no-code differ from low-code solutions?

Frederiksen: No-code truly involves no code at all. If you want non-developers to use the platform, then you need it to be no-code. Low code can speed up development, but you will still need someone with developer skills to use it.

Which is more beneficial for enterprise and DevOps, no-code or low-code?

Frederiksen: No-code empowers enterprises and DevOps teams to implement automation at scale, ultimately increasing software delivery performance. Low-code solutions still require you to know how to code in order to maintain software.

No-code allows anyone to automate workflows. Using no-code, developers and technically skilled workers can focus on high-value tasks, and QA professionals such as testers can automate and maintain testing quickly and easily.

Surveys have shown that testing is what slows down the development process the most. If you want to have a serious impact on DevOps, you should really consider using a no-code platform.

Does no-code pose a threat to software and website developers?

Frederiksen: I would argue quite the opposite. No-code has the potential to open up new opportunities for developers. More software is being built and customized than ever before, and yet we are in the midst of an acute developer shortage with 64% of companies experiencing a shortage of software engineers.

Rather than relying on code-based approaches and forcing businesses to search for talent externally, no-code allows companies to harness their existing resources to build and test software. Technical resources are then free to focus on more fulfilling, high-value work, such as accelerating innovation and digital transformation.

Where do you see no-code technology going?

Frederiksen: AI is a powerful technology, but its short-term impacts are slightly overhyped. We believe the challenge limiting the capabilities of artificial intelligence today is human-to-AI communication.

It should be possible to tell a computer what it is you want it to do without having to explain in any technical detail how to do it. Essentially, we need to be able to give an AI the requirements for a task, and then the AI can handle the rest.

We have made a lot of progress on this problem at Leapwork. There is a lot more work to be done.

Jack M. Germain

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics. Email Jack.


Unresolved Conflicts Slow eSIM Upgrade Path to Better IoT Security


Misconceptions about embedded SIM cards (eSIMs) for IoT are keeping companies from adopting this new technology. That is detrimental, as eSIMs are crucial for patching and successful secure IoT deployment.

eSIMs are slowly replacing standard SIMs in IoT devices and products such as smartwatches. They are also making their way into the machine-to-machine world.

The rollout, however, is slowed by unresolved conflicts between competing technical standards and by tightening data management regulations around the globe. Despite the need for better IoT device security, clearing the adoption roadblocks is unlikely anytime soon.

Machine-to-machine, or M2M, is a broad label that can be used to describe any technology that enables networked devices to exchange information and perform actions without the manual assistance of humans.

Controversial Technology

Led mostly by the automotive and transportation industries, eSIMs also contribute to tracking functions in health care, smart mobility, utilities, and other sectors. But eSIM technology so far remains controversial, noted Noam Lando, CEO and co-founder of global connectivity provider Webbing.

Webbing provides an enterprise-grade solution for Fortune 500 and IoT/M2M companies, as well as an embedded solution for various manufacturers across the globe. The deployment is part of a phased process to ensure a secure and continuous internet connection for all devices, no matter where they are in the world.

Lando said that “eSIM technology is a game-changer in telecom. It completely digitizes the cellular subscription provisioning process. As with any technology that is disruptive, there are a lot of debates and discussions around it to better understand its benefits, dispel misconceptions, and its impact on accelerating IoT use cases.”

Why All the Fuss?

We asked Lando to go below the circuit boards to reveal why eSIM technology is creating such an industry-wide furor.

TechNewsWorld: Is the technology upgrade to eSIMs worth the ongoing unrest?

Noam Lando: eSIM technology promises the establishment and maintenance of cost-effective connectivity that is accessible anywhere in the world, regardless of where the device is manufactured or deployed, as well as ultimate control. With the promise of eSIM technology, enterprises can scale their IoT deployments globally, reduce total ownership and business process management costs, and reduce time to market.

This creates great hype, especially when you have device makers such as Apple, Microsoft, and Google including eSIM as a standard feature in their new devices.

I sense a “BUT” here. There always seems to be a BUT in the works. So what is the big BUT surrounding eSIM development?

Lando: However, when companies look deeper into implementing eSIM technology, they realize there are two standards: consumer and machine-to-machine (M2M). They are not sure which standard to use and often realize the implementation of eSIM technology is not as simple for their IoT devices as it is for smartphones, laptops, and tablets.

So, there are a lot of discussions around the two standards and their pros and cons, especially around M2M.

What are the drawbacks to standard SIMs?

Lando: For traditional SIM cards, carrier provisioning is done at the manufacturing level. They can host only one profile and are not reprogrammable. That is why you need a new SIM when switching cellular providers. This is not ideal for IoT deployments. Especially global ones.

Noam Lando, CEO and Co-Founder of Webbing

Once the SIM has been implemented, you have vendor lock-in. With thousands and even millions of devices in an IoT deployment, it is impractical to change SIM cards when you want to change wireless carriers. It requires a site visit, and the card may be physically difficult to access.

In addition, issues surround complying with the global trend to enforce regulatory requirements on communication services and data management. These include restrictions on data leaving the country and global enterprises needing localized deployments with local wireless carriers.

This requires warehousing, managing, and deploying a number of wireless carrier-specific product SKUs, which drives up production and logistics costs.

The attraction to eSIMs seems obvious. What are the main benefits?

Lando: eSIM technology offers a robust, scalable solution to the limitations of the traditional SIM. What makes an eSIM unique is the technological advancements made to the UICC, the software of the SIM, which is now called the eUICC.

That new technology follows a new standard developed by the GSMA. It is remotely programmable and reprogrammable, can host multiple cellular carrier subscriptions, and makes the selection, contracting, and onboarding of cellular providers easier with over-the-air (OTA) provisioning.

I sense another BUT in the works here. What are the unresolved issues with eSIM replacements?

Lando: Consumer and M2M are implemented differently. The consumer standard targets consumer devices like mobile phones, tablets and laptops, wearables, and other IoT devices with an end-user interactive environment. It is secure by design, can host multiple wireless carrier profiles, and facilitates carrier swaps. However, it is designed for private consumer use.

How suitable for other uses are eSIMs?

Lando: The M2M standard targets industrial M2M and IoT devices such as cars, water meters, trackers, smart factories, and other components used in an industrial, non-end-user interactive environment.

The M2M eSIM standard is also secure by design. It facilitates carrier migration and, in theory, offers remote centralized management and provisioning of carrier profiles. However, it isn’t as cut and dried as it seems.

That said, why is upgrading not so promising yet?

Lando: M2M eSIM implementation is cumbersome, time-intensive, and has long capital investment cycles. It requires collaboration between the enterprise, eSIM manufacturers, and the wireless carrier throughout the manufacturing process for implementation.

What are the biggest misconceptions about eSIMs for IoT?

Lando: The biggest misconception about eSIM for IoT is that the benefits it provides to consumer devices can be applied to IoT. Enterprises quickly realize they must implement a different standard for IoT/M2M, which requires an SM-DP (Subscription Manager – Data Preparation) and SM-SR (Subscription Manager – Secure Routing) to provision and remotely manage carrier subscriptions. The M2M standard is cumbersome, requiring a substantial investment of funds and time to orchestrate the implementation of wireless carriers.

Where do you see the battle between competing standards headed?

Lando: When looking at mobile data connectivity, there is no major difference between M2M and IoT device needs when it comes to Remote SIM Provisioning. If anything, the benefits of eSIM (eUICC) technology are greater for M2M devices since they usually have a longer life cycle, and the demand for changing a carrier at some point is high.

This could be for commercial or technical reasons. Therefore, M2M devices are also likely to get eSIMs instead of standard SIMs.

Developers favor eSIMs to solve IoT and embedded firmware patch issues. eSIM hardware and eUICC components are certified according to the GSMA’s Security Accreditation Scheme (SAS). This guarantees a very high level of security. Furthermore, cellular connectivity is secure by design: data is encrypted, and users are securely identified.

What are the most critical problems facing IoT and embedded technologies?

Lando: The most critical problem facing IoT deployments is carrier lock-in and dealing with different global regulatory requirements. In such cases, enterprises need local deployments and local wireless carriers. Enterprises with global deployment need the flexibility to change carriers easily and efficiently to meet local regulations.

Why are companies not proactively adopting eSIM technology?

Lando: From our experience, companies want the promise of eSIM technology, but the current ecosystem fails to provide it. The two eSIM standards disregard enterprises’ need to manage their fleet of devices.

On one hand, enterprise-based devices such as mobile phones, laptops, tablets, scanners, and the like fall under the consumer standard. So companies don’t have full control over the installation and management of carrier profiles with centralized eSIM management. The consumer standard requires the end-user with the device in their hand to consent to install carrier profiles.

Meanwhile, the M2M standard for IoT deployments is cumbersome. It requires a substantial investment of funds and time to orchestrate the implementation of wireless carriers.

It also limits customer choice due to a complicated implementation to switch between carriers.

This is part of the reason we developed WebbingCTRL, an eSIM with a management platform that can be easily and remotely configured with any wireless carrier’s profile, paving the way for adoption of eSIM technology in the IoT space.


More by Jack M. Germain
More in Internet of Things