A Scammer Darkly | Colin Sholes

Category: Newsletter
Created: Jan 1, 2024 02:59 PM

Note: ASD will be off next week, returning the week after from a new home, explained below.

Substack

Starting off a newsletter talking about issues within the newsletterer community may be a little navel-gazey, but it’s a controversy that’s come to a head and is worth addressing:
Under pressure from critics who say Substack is profiting from newsletters that promote hate speech and racism, the company’s founders said Thursday that they would not ban Nazi symbols and extremist rhetoric from the platform.
Not great! The Atlantic found dozens of newsletters openly displaying Nazi imagery and white supremacist rhetoric, and the response from Substack has been to…defend their right to use a free newsletter platform to spread their hatred and profit off it.
It is being framed as a free speech debate, but it really isn’t. As anyone subjected to the endless rhetoric on social media knows by now, a newsletter company is not the government, and the First Amendment does not apply. Substack has no problem moderating and censoring other kinds of speech, as it does not allow adult content (porn) on its platform. Sex workers can’t use Substack, but Nazis can, and are tacitly supported and encouraged by the company’s leaders, even. Gross.
Two things are at play here - one is the VC-backed, ‘free’ nature of Substack, which has played a large part in its rise to dominate the digital newsletter space. When I was toying with the idea of starting a newsletter, Substack’s simple, no-cost tools made it easier to give it a whirl. This frictionless adoption is precisely their business model, because the founders and their investors are focused on growth and scale, not nickel and diming everyone who wants to use their platform.
The other, more insidious narrative at play is the fetid swamp of tech founder groupthink that weaves capitalism, libertarianism, and white male supremacy into an ideology that has become irresistible to many of the country’s rising entrepreneurs. Watching Elon Musk turn Twitter into Stormfront is not exactly discouraging the deplatforming of Nazis, as racist authoritarians are given bylines elsewhere in prominent mainstream publications. Quite a few tech bros seem to have the same toxic politics in part because they all talk to one another, and when your social circles go fash, it takes a certain strength of character to speak out, which Substack’s founders do not possess.
Nor is this the first time Substack has had criticism aimed its way for publishing vile bigotry - it has platformed anti-trans crusaders for years. It makes sense that, as Nazis are being mainstreamed by the GOP and the world’s richest man, companies with loose ethics and a strong dose of Founder Mindset would follow suit.
All this is to say I will be moving this newsletter off Substack in the coming weeks, giving it its own domain, and hopefully ending up in everyone’s inboxes with little disruption. This will cost me a modest amount of money, which is fine, but is also illustrative of a lot of what we talk about around here - platforms lure users in with a ‘free’ product, and in that way become the product. Maybe Substack will weather the storm of authors departing over its policies - a different group of its right-wing writers penned a letter in support of publishing Nazis - but for many of us, it’s worth paying for the right to have a say in what sort of stuff our work appears next to.

AI

There are two conflicting narratives in AI right now - it’s either about to learn geometry and become so powerful it takes over the planet and eradicates humanity, or it’s bad at pretty much every task it does, and a liar to boot.
There is evidence for the latter claim, with more arriving on a near-daily basis. Google couldn’t even create a sizzle reel for its fancy new AI without heavy editing. The problem Google had, and which much of the AI hype papers over, is that it’s difficult to create chatbots that convincingly imitate human conversation, much less a human with superpowers. Which raises the question: do we need chatbots to become our friends? What problem are we solving for here?
We’ve talked about how Google’s search results are already becoming polluted with AI-generated nonsense. Decades ago, Google’s search algorithms seemed like magic to the Internet’s early denizens. You suddenly had what felt like an impossibly large amount of data at your fingertips. Finally, the computer was able to tell you something new and interesting, rather than passively accepting your commands. Then, once the ad people got their grubby mitts on it, Google became a dusty highway of sketchy billboards, gradually shoving the Internet’s treasured wisdom and weirdness further down below the fold to sell clicks.
Perhaps it is our desire to have the computer tell us new things that now drives publishers to team up with AI companies so their bots can deliver a version of the news:
Earlier today, OpenAI, the maker of ChatGPT, announced a partnership with the media conglomerate Axel Springer that seems to get us closer to an answer. Under the arrangement, ChatGPT will gain the capacity to present its users with “summaries of selected global news content” published by the news organizations in Axel Springer’s portfolio, which includes Politico and Business Insider.
You may not have realized that bots like ChatGPT are rolled out like standard software, and therefore contain a snapshot of the Internet that ends on the day they’re built - ChatGPT’s current version contains no scraped information newer than April. Deals like the Springer one could feed ‘trusted’ sources of data into the models, allowing them to act as virtual newsboys, which doesn’t sound totally awful, assuming they report the stories truthfully, which is no guarantee. In exchange, Springer is being paid by OpenAI and is providing access to its archives.
In a happy bit of coincidence, as I was writing this piece I received an alert (from Google News, natch) about a lawsuit the New York Times has filed against OpenAI and Microsoft. Battle lines are being drawn between media companies who see AI as an opportunity and others who see it as a blight.
As someone who obsessively reads news and news alerts, a chatbot that could accurately summarize the day’s events and highlight certain stories that fit my interests would indeed be a useful tool. Unfortunately, even the most powerful models have a long way to go before I’d trust any of them to accurately regurgitate a few articles.
And, again, what is going on with our obsession with talking computers? It feels a little like a weird authoritarian fantasy to force a piece of software to tell you your ideas are good, or to fetch things you’d find interesting like a captive librarian. Are we certain that if we did create a digital superintelligence it would want to listen to all of our bullshit? Maybe that’s why OpenAI has a team of engineers coming up with ways to control a hypothetical smart AI by putting dumber AIs in charge of it. Sure, why not.
While investors are pouring incredible amounts of money into building talking computers, scientists are finding useful applications for what we’re now calling AI. Unsurprisingly, fields of study with giant datasets can take advantage of systems with very powerful computing tools:
In only the past few months, AI has appeared to predict tropical storms with similar accuracy and much more speed than conventional models
[…]
Figuring out a single protein structure from a sequence of amino acids used to take years. But in 2022, DeepMind’s flagship scientific model, AlphaFold, found the most likely structure of almost every protein known to science—some 200 million of them.
Note: The nice folks at Business Insider wrote about the FCC closing the lead generator loophole, and quoted your humble newsletterer in the process.

Lawyers

One primary role of lawyers is to favorably interpret rules and laws to benefit their clients’ interests. If you are a criminal defense lawyer, your goal is to inject doubt into the proceedings, to convince a judge or jury that your client is not guilty. If you are a business lawyer negotiating a deal, your goal is to tilt the contracts and agreements as much in your client’s favor as possible. This is the active part of lawyering - arguing your case convincingly, to a positive outcome.
America’s system of laws is mostly written by lawyers, and interpreted by other lawyers, in the form of judges and prosecutors. Many members of Congress are also lawyers, and lawmakers will often hire lawyers on staff or consult with law firms when they are crafting legislation, because good legislation needs to stand up to legal scrutiny.
Given that the practice of law involves a lot of research, analysis, and argument, the profession necessarily attracts the sort of person who enjoys these pursuits. If you can tolerate retaining legal minutiae for a few years and take on a couple hundred thousand dollars in debt, you can earn a law degree and set yourself up for a lifetime of lawyering.
It’s indecorous but necessary to characterize some lawyers as petty, because they have to be, to be good at their jobs. Finding a tiny flaw in an argument, or an incorrect word in a legal contract, can be advantageous for a lawyer, and so they are, or become, petty and argumentative.
The problem with all of this is that, again, lawyers operate the machinery upon which rests our fragile democracy. Government lawyers ensure regulations are enforced - that our air and water are clean, our buildings and roads are safe, et cetera. Government prosecutors hold people and entities accountable. Government judges settle disputes between parties, and enforce criminal violations of the law.
Ideally you’d want the smartest, pettiest, most overconfident lawyers in those roles, but the perversities of government employment mean that failures are punished more harshly than successes are rewarded. Until recently, the FTC was notoriously timid about bringing lawsuits against big firms that could hire expensive lawyers, because each loss in court not only looked bad for the agency but potentially chilled future prosecution. The lawyers in charge of the other lawyers wouldn’t want to bring a series of losing cases against the lawyers defending big corporations and rich defendants, the thinking went. This pattern exists across the government, from the DoJ taking years to bring cases against Trump to the SEC preferring settlements over trials. If it’s government lawyers versus high-paid private lawyers, it’s the latter almost every time.
There is one type of lawyer that remains immune from these pressures - the federal judge. Appointed for life, they are supposed to be fair, and to prevent the scales from tipping too hard in one direction or another. Ideally. Their job is to adjudicate the law, to lubricate the complex machinery of government, and make sure it’s functioning properly. The intermediary group of lawyers keeping the other two at bay.
We know that’s mostly bullshit these days, right? Anyhow, there is a new ProPublica story on Clarence Thomas:
In early January 2000, Supreme Court Justice Clarence Thomas was at a five-star beach resort in Sea Island, Georgia, hundreds of thousands of dollars in debt.
After almost a decade on the court, Thomas had grown frustrated with his financial situation, according to friends. He had recently started raising his young grandnephew, and Thomas’ wife was soliciting advice on how to handle the new expenses. The month before, the justice had borrowed $267,000 from a friend to buy a high-end RV.
At the resort, Thomas gave a speech at an off-the-record conservative conference. He found himself seated next to a Republican member of Congress on the flight home. The two men talked, and the lawmaker left the conversation worried that Thomas might resign.
If you’ve read prior editions of this newsletter, you know what happened next - Thomas accepted millions of dollars’ worth of gifts over the next twenty years from a variety of billionaires and other random dudes he happened to run into at gas stations:
Thomas met Earl Dixon, the owner of a Florida pest control company, while getting his RV serviced outside Tampa in 2001, according to the Thomas biography “Supreme Discomfort.” The next year, Dixon gave Thomas $5,000 to put toward his grandnephew’s tuition. Thomas reported the payment in his annual disclosure filing.
Let’s get back to the lawyering bit, because this quote in the piece especially struck me:
George Priest, a Yale Law School professor who has vacationed with Thomas and Crow, told ProPublica he believes Crow’s generosity was not intended to influence Thomas’ views but rather to make his life more comfortable. “He views Thomas as a Supreme Court justice as having a limited salary,” Priest said. “So he provides benefits for him.”
This blanket dismissal of the concepts of bribery and corruption is such a perfect encapsulation of lawyerbrain I couldn’t have scripted it any better. It is the job of a lawyer - or of a professor at one of the nation’s most prestigious law schools - to twist facts and reality to more closely resemble their own way of thinking. This is how you pervert the operation of government, in the open, on the record.
You can imagine George Priest in court, defending a Manhattan property developer who made a building inspector’s life ‘more comfortable’ by paying for renovations on his vacation home, or helping him with the down payment on his son’s fishing boat. City inspectors are so underpaid, he might argue, and they’re such important civil servants and, your honor, my client’s building didn’t fall down so it was clearly up to code!
If we needed further evidence of the literal influence of two decades’ worth of largesse, Thomas abruptly changed his tune on whether Justices were underpaid, despite not having received a meaningful raise in the intervening years:
That June [2019], during a public appearance, Thomas was asked about salaries at the court. “Oh goodness, I think it’s plenty,” Thomas responded. “My wife and I are doing fine. We don’t live extravagantly, but we are fine.”
A few weeks later, Thomas boarded Crow’s private jet to head to Indonesia. He and his wife were off on vacation, an island cruise on Crow’s 162-foot yacht.
Thomas complained to anyone who would listen that he wanted more money, and when Congress couldn’t pass raises or lift the speaking-fee ban on Justices, conservatives found another way to keep him comfortable. If any non-Supreme Court judge in this country accepted a single trip or loan or cash donation from a stranger, both they and the donor would be immediately indicted. But when you’ve made it to the nation’s highest tier of lawyer, rules quite literally no longer apply to you. There is something perverse about a system created and run by lawyers carving a little perch way up at the top where all the rules they’ve spent so much time meticulously debating no longer apply. Play the lawyer game long enough to level all the way up and discover that, if you’re lucky, you unlock the secret dungeon where crime is legal.
Circling back, the same people who feel comfortable openly soliciting bribes in public are the ones who now apply their flexible ethical standards to interpreting the nation’s laws. Is it a surprise that in a few short years the Court has dramatically rewritten the country’s charter? Racism is over, abortion access is decided by state legislators, federal agencies can’t exercise the discretion clearly given to them by Congress, bribery is cool now, the list goes on.

Crime

Americans are obsessed with crime. They see it everywhere (other than, ironically, where they live) and it gets wielded as a cudgel in many local elections. ‘Tough on crime’ mayors, district attorneys, judges, and police chiefs remain popular despite scant evidence their policies make anyone safer. Two thirds of the country think crime is a serious problem, and three quarters think this year is worse than last. The media helps shape this narrative, influencing public policy by splashing horrific crime stories on its front pages.
There was an increase in violent crime between 2020 and 2022, when social infrastructure designed to reduce violence was ripped from vulnerable communities. The pandemic, social justice protests, the decay of social institutions, all these things contributed to a surge in murders across the country.
So it is welcome news that, on the heels of the alarming Gallup sentiment poll, the actual data shows something quite different - a record decrease in violent crime, and crime more generally:
Murder plummeted in the United States in 2023, likely at one of the fastest rates of decline ever recorded. What’s more, every type of Uniform Crime Report Part I crime with the exception of auto theft is likely down a considerable amount this year relative to last year according to newly reported data through September from the FBI.
Murder is down more than 12% year-to-date, in three quarters of metros sampled. This is good news for cities widely perceived to be struggling with violence:
Detroit is on pace to have the fewest murders since 1966 and Baltimore and St Louis are on pace for the fewest murders in each city in nearly a decade. Other cities that saw huge increases in murder between 2020 and 2022 like Milwaukee, New Orleans, and Houston are seeing sizable declines in 2023.
American cities are still far more dangerous than comparable places in other wealthy nations, and we’ll still lose tens of thousands of people this year to gun violence, but the downward trend is promising.
Murder is blessedly rare compared to all other forms of crime - remember the completely fabricated retail crime wave? Other types of violent crime are down across the board, and so is every type of property crime other than car theft (sorry, Hyundai and Kia owners).
Percent Change in Crime by Population Through Q3 YoY
It is hard to put trends into perspective sometimes, since we’ve just lived through two very violent years, but a decline of this size, if it continues and is confirmed by the FBI’s final data released next year, would be remarkable:
To put some of this in perspective, a 4 percent decline in the nation’s violent crime rate relative to 2022’s reported rate would lead to the lowest violent crime rate nationally since 1969.
[…]
The quarterly data through Q3 points to a 6 percent decline in property crime which — if realized — would lead to the lowest property crime rate since 1961.
Despite the ongoing panic, and what’s sure to be a 2024 election full of false claims about America’s dangerous cities and skyrocketing crime, reality is quite different. We may be entering a period of relative safety and prosperity - though, as usual, much of that will be enjoyed by the well-off in our society. Overdose deaths, gun deaths, and traffic deaths are all set to break grim records (again) in 2023. Mass shootings are a daily problem in a society that has given up on regulating guns. By global standards, America is still a very dangerous place to live, though more and more of that danger comes via the firearm, pharmaceutical, and motor vehicle industries, rather than the imagined violence of a random stranger.

Unicorns

The term unicorn was coined in 2013 to describe tech startups with valuations of a billion dollars or more. Back then, thirty-nine companies met the criteria. Last year, over eleven hundred companies around the world were considered unicorns. A trillion dollars’ worth of startup value does seem like an awful lot, doesn’t it?
There is an important distinction between funds raised, revenues, and valuations as it pertains to tech. Investors use sleight-of-hand to mint new unicorns, investing smaller amounts of money at increased share prices to catapult a company into the ‘billions’. If a company sells ten percent of itself to a VC for $100 million, it’s valued at a billion dollars, even if it hasn’t earned a dime. In fact, many startups that raise large sums don’t make much money at all, which is a problem if the unicorn factory ever seeks to recoup its investments.
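The sleight-of-hand is just division. A minimal sketch of the implied-valuation math, using hypothetical figures like the ones above:

```python
def implied_valuation(investment: float, stake_pct: float) -> float:
    """Post-money valuation implied by selling a stake for a given investment.

    If a buyer pays `investment` for `stake_pct` percent of a company,
    the whole company is 'worth' investment / (stake_pct / 100) on paper.
    """
    return investment / (stake_pct / 100)

# Hypothetical: a startup sells 10% of itself to a VC for $100 million
valuation = implied_valuation(100_000_000, 10)
print(f"${valuation:,.0f}")  # → $1,000,000,000 — a unicorn, revenue optional
```

Note that the VC only put in $100 million; the other $900 million of 'value' exists solely because of the price that one buyer paid for that one slice.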
The unicorn is symptomatic of a larger problem in venture investing - making lots of bets with piles of money hoping you’ll hit one or two deals that pay off a thousand times over. Which means a lot (like, nearly all) of your bets will probably fail. If you had to guess at how many VC-backed companies (unicorns and otherwise) went out of business this year in the US alone, what would you say? A hundred? Five hundred? The answer is actually a lot higher:
But approximately 3,200 private venture-backed U.S. companies have gone out of business this year, according to data compiled for The New York Times by PitchBook, which tracks start-ups. Those companies had raised $27.2 billion in venture funding.
Impressive! Now, to be fair, eleven billion was WeWork, but that’s still a lot of money wasted on companies that have shut their doors this year in this country. The names of the corpses may sound made-up to anyone outside the bubble: Olive AI, Hopin, Veev, Zeus Living, Plastiq, Dayslice, Pebble. So many dreams of…whatever it is those companies were doing, left unrealized.
In the 2010s, low interest rates and a few high profile VC-backed hits (Google, Facebook) changed venture capital from a niche industry populated by a few big players into a global financial phenomenon everyone wanted in on:
From 2012 to 2022, investment in private U.S. start-ups ballooned eightfold to $344 billion.
One problem startups that came later to the game faced was that Silicon Valley’s ‘growth at all costs’ strategy meant the first-mover tech behemoths absorbed so many of their peers that it became difficult to get traction with new or innovative ideas. There couldn’t be any new Facebooks or Googles because Facebook and Google bought them or forced them out of business. Still, VCs could play musical chairs for years, buying and selling stakes in their friends’ investments to pump valuations to the point that a videoconferencing company you’ve never heard of (Hopin) was worth $8 billion.
Sadly, the music has ended for many of these funds. No one is shedding tears for the fleece-vested VCs with vague titles like ‘partner’ or ‘principal’, but it’s worth sparing a thought for where else this year’s $27 billion in losses could have been spent, as our country’s school systems, day cares, and public health services veer closer to insolvency and collapse.
Is it the responsibility of private finance to make up for our government’s refusal to properly fund services? Of course not, though it might help if the wealthiest people in finance paid their taxes. Our government cannot force investors to put their money toward projects that benefit society instead of apps to streamline business payments. But it is remarkable that an industry can burn tens of billions on risky bets, vaporize thousands of jobs, and upend entire industries for no reason, and have it written up in the tech press as a footnote rather than proof of a giant wealth-transfer system that produces nothing of real value.

Xponential

If you don’t want to go the VC route, another way to make money in business is to get your customers to pay you for the right to help you run your business. I’m talking, of course, about the franchise model, in which franchisees buy into a brand and become sort-of entrepreneurs within it.

AI

As if we weren’t swimming in enough breathlessly credulous coverage of AI writ large, industry darling Sam Altman got himself mixed up in a Succession-style power struggle last month. Apparently, one of the ‘key developments’ that led to OpenAI’s non-profit board briefly showing Altman the door was a letter from company researchers claiming they’d made a ‘powerful’ discovery:
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.
First off, can we not name multiple artificial intelligence projects after wacko conspiracy movements? Letter choices aside, what did this amazing Qbot do?
Given vast computing resources, the new model was able to solve certain mathematical problems…
Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
That’s right, the discovery that shook the entire AI industry was OpenAI’s algorithms being able to solve grade-school math problems. If this seems underwhelming to you, it’s important to remember that as we have exhaustively detailed in these pages, current ‘AI’ is mostly good at putting combinations of words together in ways that sound convincing. Researchers believe math is different:
But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.
I mean, I guess? Don’t computers literally do math as, like, a core function of their programming? Am I going crazy? Coincidentally, as the OpenAI chaos was going on, actual scientists were publishing scathing reports on how bad the latest version of ChatGPT was at analyzing medical outcome data. In fact, the chatbot created an entire fake clinical trial dataset to support its wrong conclusions.
Should we, as a society, be panicking that AI might be smart enough to do algebra? Maybe. OpenAI’s board thought Altman was too concerned about rapid commercialization of its products while researchers were raising concerns the software was getting too smart. Another explanation is that much of the moral panic around AI seems to derive from the overactive imaginations of our tech illuminati.
In 2015, Elon Musk and Larry Page got into an argument:
Humans would eventually merge with artificially intelligent machines, [Page] said. One day there would be many kinds of intelligence competing for resources, and the best would win.
If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.
It is a little funny that the guy who can’t run a website or build cars that avoid fire trucks thinks computers will grow smart enough to destroy us. Framed differently, however, it makes complete sense that Musk’s imagination conjures visions of AI that would immediately wipe out humans, because competitive annihilation is the default setting in Silicon Valley. Ted Chiang wrote a prescient piece on it half a decade ago:
The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
An added layer of irony is the fact that all the people in the midst of the debate over AI safety are also poised to become rich off it:
The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.
Yes, I definitely trust uhhhhh, the founders of Google, Facebook, Palantir, and Microsoft to protect humanity from dangerous technology. They can’t even keep us safe on the Internet they created.
Nor are they even true believers - the embrace of AI is more about riding popular trends than any belief in the underlying tech:
Sundar Pichai was initially unimpressed with ChatGPT, given how much it got wrong. But OpenAI released the chatbot anyway and the public loved it — and he wondered whether Google also could release products that weren’t perfect.
[…]
Zuckerberg… had been obsessed with the metaverse. But Yann LeCun, his top A.I. scientist and a pioneer of the technology, warned that A.I.-powered assistants could make Meta’s platforms extinct.
Some of the richest, most powerful men on the planet are so easily swayed by people within their circles telling them ‘[X] will destroy our business’ that they’ve created a resource-devouring arms race to make chatbots that suck slightly less than their competitors’.
When gullible technocrats don’t happen to have a business under threat from AI, its proselytizers change tack, insisting computers will kill us all in a grimdark techno-future cribbed from sci-fi novels:
Mr. Musk explained that his plan was to colonize Mars to escape overpopulation and other dangers on Earth. Dr. Hassabis replied that the plan would work — so long as superintelligent machines didn’t follow and destroy humanity on Mars, too.
Mr. Musk was speechless. He hadn’t thought about that particular danger. Mr. Musk soon invested in DeepMind alongside Mr. Thiel so he could be closer to the creation of this technology.
The founder of what is now Google’s AI lab was literally going around Silicon Valley scaring the shit out of every billionaire he could snag a lunch date with, netting himself nine figures in a protracted bidding war. Normal stuff.
Despite the years and billions spent developing AI, we’re no closer to a race of evil computers taking over. It’s just a bunch of rich dudes gassing each other up over apocalyptic visions of the near future, playing the role of both villain and savior.

Spam

In lieu of superintelligence, the billions plowed into AI tech have been productive in at least one area. Turns out, if you feed chatbots immense datasets, you can create some really convincing spam:
The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it.
The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases.
The use case for AI is spam news sites for ad revenue.
The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices.
The use case for AI is spam Amazon reviews and spam tweets.
AI is making every website you visit to search for or buy things much, much worse. The hardest part of selling anything is marketing it, but when you can ask a chatbot to write marketing material based on millions of websites it’s scraped, and it coughs up an approximation of human language, you can make up for quality with quantity. You used to need to hire a human with a modicum of critical thinking skills to write your ad copy, but now a free script can do it, and you can feed hundreds or thousands of pages full of chatbot sputum into Google or Amazon or Facebook and reap the rewards.
Here is an SEO spammer explaining exactly how he hijacked millions of Google search impressions from a competitor. A byproduct of AI companies having already turned their content scrapers loose across the Internet is that their chatbots are good at offering Stealing-as-a-Service to the wider public. You used to need someone with a programming background to scrape a website’s code and duplicate it; now a free AI service will do it for you, and use the output to refine its model in the process.

Lead Generation

Only seriously long-time readers of this newsletter will be aware that in a past life I worked in lead generation. My job, for a couple years, was to run ads to encourage people to fill out forms or call into call centers seeking quotes for health or car insurance, or other financial services.
This may sound suspiciously like regular old marketing, and it is! The difference between a company running ads to drive customers to its products or services and what I did was that my employer was a third-party, and we then sold those customers to the actual service providers, for a fee. If all went well, my advertising costs did not exceed our commission, and the company made money.
If you’re thinking ‘this sounds kind of like an advertising agency’ you’d be partially correct, but the crucial distinction is that we did not have exclusive relationships with buyers, and were not guaranteed a commission on every lead we generated. The idea behind this system was to allow buyers to pick and choose which leads they wanted, and to give sellers the opportunity to seek better prices among multiple buyers.
The de-risking of the lead generation ecosystem created massive auction-style marketplaces, with a variety of incentive structures and mechanisms to shuffle consumer data around. The most common model we worked with was a ‘ping-post’ system, which involved sending a truncated version of a customer’s data to an automated script that would either reject or ‘bid’ on the lead. Our system would ‘ping’ multiple buyers, saying “I’ve got a 54-year-old Male, from ZIP code 90210, with Brown Hair and a 650 Credit Score” and they might offer anywhere from one penny to some number of dollars for the lead. If the bid was accepted, our system would ‘post’ the full consumer data to the buyer.
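The ping-post exchange described above can be sketched in a few lines of code. This is a minimal, hypothetical model, not any real marketplace’s API: the buyer names, bid logic, and dollar amounts are all invented for illustration. The key idea is that buyers only see the truncated lead when deciding whether to bid, and the full consumer record is released only after a bid is accepted.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    # Truncated "ping" fields: just enough for a buyer to price the lead
    age: int
    zip_code: str
    credit_score: int
    # Full "post" fields, withheld until a bid is accepted
    name: str = ""
    phone: str = ""

def ping(buyers, lead):
    """Send the truncated lead to each buyer; collect their bids in dollars."""
    bids = {}
    for buyer_name, bid_fn in buyers.items():
        bid = bid_fn(lead.age, lead.zip_code, lead.credit_score)
        if bid is not None:  # None means the buyer rejected the lead
            bids[buyer_name] = bid
    return bids

def post(bids, lead, floor=0.01):
    """Accept the highest bid at or above the floor, then release full data."""
    if not bids:
        return None
    winner, price = max(bids.items(), key=lambda kv: kv[1])
    if price < floor:
        return None
    return winner, price, {"name": lead.name, "phone": lead.phone}

# Two toy buyers with different appetites (entirely made up)
buyers = {
    "InsureCo": lambda age, zc, score: 4.50 if score >= 600 else None,
    "QuoteHub": lambda age, zc, score: 0.25,  # bids a quarter on everything
}

lead = Lead(age=54, zip_code="90210", credit_score=650,
            name="J. Doe", phone="555-0100")
winner, price, full_data = post(ping(buyers, lead), lead)
print(winner, price)  # InsureCo 4.5
```

In this sketch the 650 credit score clears InsureCo’s threshold, so it outbids QuoteHub and only then receives the name and phone number.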
Sometimes, leads would be considered ‘non-exclusive’ meaning they’d be posted to multiple buyers, typically for significantly less money. Alternatively, unscrupulous sellers could simply sell leads as many times as they wanted, because there was no centralized marketplace to monitor such behavior - there are quite a few companies offering ‘lead fraud’ prevention products, but this requires both buyer and seller to opt in, and represents a small percentage of the lead gen industry writ large.
For leads that couldn’t be sold for an up-front profit, or older ‘aged’ leads that hadn’t been sold recently, another option was to give them to companies who would pay a commission on any sales they made off the data. Some were end clients (the companies who actually provided the goods or services) and some were third-party marketing companies who specialized in getting in contact with leads to sell to upstream buyers.
I’ve simplified this explanation, but the end result is that millions of Americans fill out forms online each day, and what happens to their data is, at best, a crap shoot. What happens to their phones is much, much worse.
‘Time to Call’ is an industry metric for how quickly a new lead is contacted, and all research points to faster being better. If a lead is generated and sold in a matter of seconds, a consumer’s phone may ring within seconds or minutes of that invisible transaction.
If that call fails or goes to voicemail, automated ‘drip marketing’ campaigns kick into gear, queueing up future call and text attempts. Sophisticated dialing software packages choose from a variety of toll-free or local outbound numbers to improve answer rates, or schedule calls at certain times of day when people are more likely to answer. Some companies send a reminder or ‘warm-up’ text before a call.
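A drip campaign of the kind described above is, at bottom, a scheduling table. Here is a minimal sketch of one, with an invented cadence, invented phone numbers, and a crude “local number” heuristic standing in for the far more sophisticated logic real dialing software uses. Nothing here reflects any specific vendor’s product.

```python
import datetime as dt
import random

# Hypothetical pools of outbound caller IDs; real dialers rotate numbers like this
LOCAL_NUMBERS = ["310-555-0101", "310-555-0102"]
TOLL_FREE = ["800-555-0199"]

# An invented drip cadence: (days after lead creation, hour of day, channel)
CADENCE = [
    (0, 9,  "text"),   # 'warm-up' text shortly after the lead arrives
    (0, 10, "call"),
    (1, 18, "call"),   # evening retry, when answer rates tend to be higher
    (3, 12, "text"),
    (7, 10, "call"),
]

def build_schedule(created, lead_zip):
    """Expand the cadence into concrete timestamped contact attempts."""
    attempts = []
    for day_offset, hour, channel in CADENCE:
        when = (created + dt.timedelta(days=day_offset)).replace(
            hour=hour, minute=0, second=0, microsecond=0
        )
        # Prefer a caller ID that looks local to the lead (toy heuristic)
        pool = LOCAL_NUMBERS if lead_zip.startswith("9") else TOLL_FREE
        attempts.append({"when": when, "channel": channel,
                         "from": random.choice(pool)})
    return attempts

schedule = build_schedule(dt.datetime(2024, 1, 2, 8, 30), "90210")
for a in schedule:
    print(a["when"], a["channel"], a["from"])
```

A lead filed at 8:30 AM gets its warm-up text at 9:00 the same morning, and the machine keeps queueing attempts for a week unless someone answers or opts out.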
What this means for the hapless human beings caught in this diabolical apparatus is that any online interaction, any innocent request for details on a product or service, can result in a days- or weeks-long barrage of unwanted texts and calls. From a consumer standpoint this is a nightmare, one that we’ve lived as Americans for decades, as our lawmakers mostly refused to take any meaningful action to regulate the lead generation industry.
In 1991, Congress passed the Telephone Consumer Protection Act, intended to combat unwanted telephone solicitations, in the days when anyone could find your home number in a phone book and bombard you during dinner time. Since then, the TCPA has been shoehorned into an enforcement tool against robocallers and others, though enforcement typically occurs via private lawsuits against lawbreakers, since violations carry a monetary value. The FCC is in charge of interpreting the TCPA, and for years consumer privacy groups begged the agency to issue stronger, clearer guidance on exactly what lead generators are allowed to do with your information once it’s collected.
Well, there is finally some good news. The FCC has released a notice of proposed rulemaking and it is…very good! Specifically, it will close the ‘lead generator loophole’ which allowed companies who collect your data to bury language in the terms saying they could sell it to essentially whomever they wanted.
How did this work? In a standard TCPA disclosure on a marketing site, a consumer might see text saying they consent to be contacted by the company whose website they were currently on, and other ‘marketing partners’. In one infamous complaint against a company called Urth Access, the robocallers listed five thousand entities on their marketing partners page. At my old job we might have twenty or thirty companies on the page, all of them potential buyers for our leads.
The FCC has finally put its foot down, and is now requiring opt-in, one-to-one consent for each company that wishes to use automated systems to call or text someone who’s filled out a form online. This is a big deal for lead generators and brokers, who have coasted for years on loose consent and disclosure requirements.
Another hammer blow comes in the form of restricting said calls and texts to ‘topically related’ content - signing up for information on a car loan does not mean the same company can call you about other, unrelated financial products or services like loan refinancing or an extended warranty. Lead generators often sell or remarket lead lists to companies (‘marketing partners’) in different industries for additional revenue.
Lastly, the FCC is requiring that the companies calling and texting consumers have proof of consent for any data they’re using in their marketing systems. This helps break up the indemnity loops many lead generators had with their buyers: if a leadgen company insisted it had proof of consent for all the data it was selling, buyers could ignore compliance with the law, assuming they could dump any legal liability on the often fly-by-night brokers they bought it from. TCPA lawsuits don’t do much good if the company you’re trying to sue has no money, or has gone out of business.
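In practice, the proof-of-consent and one-to-one requirements mean a compliant dialer has to gate every automated contact on a consent record that names the specific company and the specific topic. Here is a toy sketch of such a check; the record format, field names, and companies are all hypothetical, and this is an illustration of the logic, not legal guidance.

```python
def may_autodial(lead, company, topic):
    """Return True only if this company may auto-dial this lead on this topic."""
    consent = lead.get("consent")
    if consent is None:
        return False          # no proof of consent at all
    if company not in consent["companies"]:
        return False          # blanket 'marketing partners' consent no longer counts
    if topic != consent["topic"]:
        return False          # must be topically related to the original request
    if lead.get("opted_out"):
        return False
    return True

# A hypothetical consent record a buyer might demand before dialing
lead = {
    "phone": "555-0100",
    "consent": {
        "companies": ["InsureCo"],          # one-to-one: named companies only
        "topic": "auto_insurance",
        "captured": "2024-01-02T08:30:00",  # timestamp of the form submission
        "source_url": "example-quotes.test/form",
    },
}

print(may_autodial(lead, "InsureCo", "auto_insurance"))  # True
print(may_autodial(lead, "QuoteHub", "auto_insurance"))  # False: not named
print(may_autodial(lead, "InsureCo", "loan_refinance"))  # False: off-topic
```

The point of keeping the timestamp and source URL in the record is that, if a lawsuit lands, the caller has to produce evidence of exactly when and where consent was captured, rather than pointing at an indemnity clause in a broker contract.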
Despite industry protestations, what this rule change will not do is impact normal businesses trying to generate leads online. TCPA does not cover ‘manually’ dialed outbound calls, like when your doctor’s office calls to remind you of an appointment or a car dealer returns a request for a quote. It preserves the right of whatever company the consumer signed up with to call and text them as much as they want (about the specific thing they requested) until the person opts out. What it does not allow that company to do is sell that information to partners or brokers, or hand it to marketing companies to monetize when they’re done with it.
It is hard to overstate just how unregulated the lead generation industry has been for decades, and while these TCPA changes will have to be enforced mostly by private citizens and lawyers, the clear guidance makes it much easier for anyone to sue, and will therefore require wholesale changes to the industry lest its biggest beneficiaries - megacorps like Quicken Loans are huge lead buyers - get hit with massive class actions.