Rewards for Recycling: Q&A With Gazelle CEO Israel Ganot, Part 2

In the fast-moving world of consumer electronics, last year’s gaming system and smartphone are old news. Luckily, they’re not entirely worthless.

Electronics recommerce company Gazelle buys this equipment, offering consumers cash, as well as free packaging and shipping — and then resells it for a profit.

In Part 2 of his exclusive interview with the E-Commerce Times, Gazelle cofounder and CEO Israel Ganot talks about the company’s electronic recycling philosophy, business model and future.

E-Commerce Times: What exactly does “recommerce” mean?

Israel Ganot: Recommerce is a new consumer model that rewards consumers for smart consumption.

ECT: Where did the name “Gazelle” come from?


Ganot: We wanted something short and memorable. It’s the second fastest animal, and it has a connection to the environment. We also like the slogan, “Don’t just sell it, Gazelle it.” The tagline we’re using now is “Keep it moving.” Don’t let things get stale; give those devices a new life. It’s all about keeping things moving.

ECT: How does Gazelle work?


Ganot: Most consumers engage with Gazelle when they upgrade to a new device. You’ll go to Gazelle, search for your specific model, answer questions about its physical condition, and we’ll make you an offer. If you accept the offer, we’ll send you packaging and pay for the shipping, and within a week of reaching our facility, the item will be received, inspected, and data-wiped. You’ll get paid with a check, through PayPal, or with an Amazon gift card — or you can donate the proceeds to a charity.

Another way you can engage is to go to a site like Walmart.com and go to their electronics trade-in and recycling, powered by Gazelle, and do it that way. If you go to Gazelle through the Walmart website, you’ll get a Walmart gift card.

ECT: What is Gazelle’s business model? How does it make money?


Ganot: Once we collect these devices, we pay consumers, and that’s our inventory. The only difference between us and a typical retailer is that the retailer buys from wholesalers, while we build our inventory from consumers, and then we resell the product in the secondary market.

We sell in a lot of different places. Our biggest channels are eBay, Amazon, wholesale channels and international buyers. Demand for the product is insatiable. People who live on the coasts want the latest and greatest products. We buy on the coasts, and then we sell to the middle of the country and to developing markets.

Our biggest challenge as a business is buying more inventory, and buying more inventory is all about educating consumers about recommerce.

ECT: What’s the benefit of using Gazelle as opposed to other similar services?


Ganot: We are seeing a lot of competition from other companies providing trade-in services, mostly traditional and e-commerce retailers. It’s extremely positive, since the biggest challenge in our business is awareness.

Ultimately, it will help change consumer behavior. What’s different about Gazelle is our customer experience, which is the best in the industry. It’s all about delivering that experience every single day to every customer: the free shipping, the free packaging, and the way we communicate with our users. We see our users coming back to us over and over again.

ECT: What role has social media played in promoting and growing Gazelle?


Ganot: The primary way we use social media is for customer care, mostly on Twitter and Facebook. We also give our users the tools they need to evangelize, and we’ve seen a lot of tweets about Gazelle. We’ve also seen growth in YouTube videos, with sites like Gadget Lab providing videos on fixing devices.

ECT: Are there any safety or privacy concerns with selling electronics on Gazelle?


Ganot: That’s one of the most important services we provide: wiping the electronics that customers trade in. When consumers send their items to Gazelle, they know their data is safe, and they rely on that. It’s part of our brand and our trust, and it’s about doing it every single day. We know that any data breach would affect our brand.

ECT: How is Gazelle evolving? What’s in the future for the company?


Ganot: There are two areas in which we’re going to invest. The first, in terms of sheer growth, is mass media: we’re starting to invest in radio and TV ads, and really getting the message out is a major area of opportunity.

Number two is working with our retail partners to bring the service into the retail environment, and we’ll be rolling out more retail partners over the next year. We’re also building the infrastructure of the company — something that needs to scale with the growth of the business. We’re also thinking about international expansion and other categories we can move into.

Freelance writer Vivian Wagner has wide-ranging interests, from technology and business to music and motorcycles. She writes features regularly for ECT News Network, and her work has also appeared in American Profile, Bluegrass Unlimited, and many other publications. For more about her, visit her website.



Data Observability’s Big Challenge: Build Trust at Scale

The cost of cleaning data often falls outside the comfort zone of businesses swamped with potentially dirty data, and that clogs the pathways to trustworthy, compliant corporate data flow.

Few companies have the resources needed to develop tools for challenges like data observability at scale, according to Kyle Kirwan, co-founder and CEO of data observability platform Bigeye. As a result, many companies are essentially flying blind, reacting when something goes wrong rather than proactively addressing data quality.

Data trust provides a legal framework for managing shared data. It promotes collaboration through common rules for data security, privacy, and confidentiality, and it enables organizations to securely connect their data sources in a shared repository of data.

Bigeye brings data engineers, analysts, scientists, and stakeholders together to build trust in data. Its platform helps companies automate monitoring and anomaly detection and create SLAs to ensure data quality and reliable pipelines.

With complete API access, a user-friendly interface, and automated yet flexible customization, data teams can monitor quality, proactively detect and resolve issues, and ensure that every user can rely on the data.

Uber Data Experience

Two early members of the data team at Uber — Kirwan and Bigeye Co-founder and CTO Egor Gryaznov — set out to use what they learned building Uber’s scale to create easier-to-deploy SaaS tools for data engineers.

Kirwan was one of Uber’s first data scientists and the first metadata product manager. Gryaznov was a staff-level engineer who managed Uber’s Vertica data warehouse and developed several internal data engineering tools and frameworks.

They realized the tools their teams were building to manage Uber’s massive data lake and thousands of internal data users were far ahead of what was available to most data engineering teams.

Automatically monitoring and detecting reliability issues within thousands of tables in data warehouses is no easy task. Companies like Instacart, Udacity, Docker, and Clubhouse use Bigeye to keep their analytics and machine learning working continually.

A Growing Field

Founding Bigeye in 2019, they recognized the growing problem enterprises face in deploying data into high-ROI use cases like operations workflows, machine learning-powered products and services, and strategic analytics and business intelligence-driven decision making.

The data observability space saw a number of entrants in 2021. Bigeye separated itself from that pack by providing users the ability to automatically assess customer data quality with more than 70 unique data quality metrics.

These metrics feed thousands of separately trained anomaly detection models, which ensure that data quality problems — even the hardest to detect — never make it past the data engineers.
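Bigeye’s actual models are proprietary, but the basic idea of metric-based anomaly detection can be sketched with a simple z-score check: track a quality metric (here, a hypothetical daily null rate for one column) and flag values that fall far outside the historical distribution.

```python
# Illustrative sketch only: Bigeye's real anomaly detection models are
# proprietary; this z-score check just shows the general technique.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it falls more than `threshold` standard
    deviations from the mean of the historical metric values."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu  # flat history: any change is anomalous
    return abs(latest - mu) / sigma > threshold

# Daily null-rate (%) for a column, then a sudden spike:
null_rates = [0.1, 0.2, 0.15, 0.1, 0.12, 0.18, 0.14]
print(is_anomalous(null_rates, 0.13))  # typical value -> False
print(is_anomalous(null_rates, 5.0))   # spike -> True
```

A production system would run one such model per metric per table, with learned rather than fixed thresholds.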

Last year, data observability burst onto the scene, with no fewer than ten data observability startups announcing significant funding rounds.

This year, data observability will become a priority for data teams as they seek to balance the demand of managing complex platforms with the need to ensure data quality and pipeline reliability, Kirwan predicted.

Solution Rundown

Bigeye’s data platform is no longer in beta. Some enterprise-grade features are still on the roadmap, like complete role-based access control. But others, like SSO and in-VPC deployments, are available today.

The app is closed source, and so are the proprietary models used for anomaly detection. Bigeye is a big fan of open-source options but decided to develop its own models to meet its internal performance goals.

Machine learning is used in a few key places to bring a unique blend of metrics to each table in a customer’s connected data sources. The anomaly detection models are trained on each of those metrics to detect abnormal behavior.

Three features, built in at the end of 2021, automatically detect and alert on data quality issues and enable data quality SLAs.

The first, Deltas, makes it easy to compare and validate multiple versions of any dataset.

Issues, the second, brings multiple alerts together into a single timeline with valuable context about related issues, making it simpler to document past fixes and speed up resolutions.

The third, Dashboard, provides an overall view of the health of the data, helping to identify data quality hotspots, close gaps in monitoring coverage, and quantify a team’s improvements to reliability.
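The Deltas feature described above is not open for inspection, but the kind of dataset-version comparison it performs can be sketched in a few lines. The function name and the specific checks below are illustrative assumptions, not Bigeye’s actual API.

```python
# Hypothetical sketch of a "Deltas"-style comparison between two
# versions of a dataset; not Bigeye's real implementation.
def compare_versions(old_rows, new_rows, key):
    """Compare two lists of row dicts and report simple deltas
    on row count and on the set of primary keys."""
    old_keys = {r[key] for r in old_rows}
    new_keys = {r[key] for r in new_rows}
    return {
        "row_count_delta": len(new_rows) - len(old_rows),
        "added_keys": sorted(new_keys - old_keys),
        "removed_keys": sorted(old_keys - new_keys),
    }

v1 = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
v2 = [{"id": 2, "amount": 20}, {"id": 3, "amount": 35}]
print(compare_versions(v1, v2, key="id"))
# {'row_count_delta': 0, 'added_keys': [3], 'removed_keys': [1]}
```

A real tool would also compare column-level statistics (null rates, distinct counts, distributions) between the two versions.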

Eyeballing Data Warehouses

TechNewsWorld spoke with Kirwan to demystify some of the complexities his company’s data sniffing platform offers data scientists.

TechNewsWorld: What makes Bigeye’s approach innovative or cutting edge?

Kyle Kirwan, co-founder and CEO of Bigeye

Kyle Kirwan: Data observability requires constant and complete knowledge of what is happening inside all the tables and pipelines in your data stack. It is similar to what SRE [site reliability engineering] and DevOps teams use to keep applications and infrastructure working around the clock. But it is reimagined for the world of data engineering and data science.

While data quality and data reliability have been an issue for decades, data applications are now critical to how many leading businesses run, because any loss of data, outage, or degradation can quickly result in lost revenue and customers.

Without data observability, data teams must constantly react to data quality issues and wrangle the data as they go to use it. A better solution is identifying the issues proactively and fixing the root causes.

How does trust impact the data?

Kirwan: Often, problems are discovered by stakeholders like executives who do not trust their often-broken dashboard. Or users get confusing results from in-product machine learning models. The data engineers can better get ahead of the problems and prevent business impact if they are alerted early enough.

How is this concept different from similar-sounding technologies such as unified data management?

Kirwan: Data observability is one core function within data operations (think: data management). Many customers look for best-of-breed solutions for each of the functions within data operations. This is why technologies like Snowflake, Fivetran, Airflow, and dbt have been exploding in popularity. Each is considered an important part of “the modern data stack” rather than a one-size-fits-none solution.

Data observability, data SLAs, ETL [extract, transform, load] code version control, data pipeline testing, and other techniques should be used in tandem to keep modern data pipelines working smoothly, just as high-performing software engineering and DevOps teams use their sister techniques.

What role do data pipeline and DataOps play with data visibility?

Kirwan: Data observability is closely related to DataOps and the emerging practice of data reliability engineering. DataOps refers to the broader set of all operational challenges that data platform owners will face. Data reliability engineering is a part of DataOps, but only a part, just as site reliability engineering is related to, but does not encompass all of, DevOps.

Data observability could have benefits to data security, as it could be used to identify unexpected changes in query volume on different tables or changes in behavior to ETL pipelines. However, data observability would not likely be a complete data security solution on its own.
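The security angle Kirwan mentions — spotting unexpected changes in query volume on a table — is another change-detection problem. The sketch below is an assumption of how such a check might look, not Bigeye functionality; the threshold factor and names are invented for illustration.

```python
# Hypothetical illustration: flag tables whose daily query volume
# jumps far above their recent average, which might indicate abuse
# or data exfiltration. Not a complete security solution.
def unusual_query_volume(daily_counts, today, factor=3.0):
    """Return True if today's query count exceeds `factor` times
    the trailing average of recent daily counts."""
    baseline = sum(daily_counts) / len(daily_counts)
    return today > factor * baseline

counts = [120, 110, 130, 125, 115]  # queries/day against one table
print(unusual_query_volume(counts, 118))   # normal -> False
print(unusual_query_volume(counts, 900))   # possible exfiltration -> True
```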

What challenges does this technology face?

Kirwan: The challenges cover problems like data discovery and governance, cost tracking and management, and access controls, as well as how to manage an ever-growing number of queries, dashboards, and ML features and models.

Reliability and uptime are certainly challenges for which many DevOps teams are responsible. But they are often also charged with other aspects like developer velocity and security considerations. Within these two areas, data observability enables data teams to know whether their data and data pipelines are error-free.

What are the challenges of implementing and maintaining data observability technology?

Kirwan: Effective data observability systems should integrate into the workflows of the data team. This enables them to focus on growing their data platforms rather than constantly reacting to data issues and putting out data fires. A poorly tuned data observability system, however, can result in a deluge of false positives.

An effective data observability system should also take much of the maintenance out of testing for data quality issues by automatically adapting to changes in the business. A poorly optimized system, however, may undercorrect or overcorrect for those changes, requiring manual tuning, which can be time-consuming.

Data observability can also be taxing on the data warehouse if not optimized properly. The Bigeye teams have experience optimizing data observability at scale to ensure that the platform does not impact data warehouse performance.

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics. Email Jack.
