“At this point, they’re responsible for making decisions about just about every aspect of our life,” said Chris Gilliard, visiting scholar at the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School.
However, how algorithms work and the conclusions they reach can be mysterious, especially as the use of artificial intelligence techniques makes them increasingly complex. Their results are not always understood or accurate – and the consequences can be dire. And the impact of potential new legislation to limit the influence of algorithms on our lives remains uncertain.
Basically, an algorithm is a series of instructions. As Sasha Luccioni, a researcher on the AI ethics team at AI model builder Hugging Face, pointed out, an algorithm can be hard-coded, with fixed instructions for a computer to follow, such as putting a list of names in alphabetical order. Simple algorithms have been used in computerized decision-making for decades.
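A hard-coded algorithm of that kind fits in a few lines of code. The sketch below (in Python, with made-up names) alphabetizes a list using insertion sort, a fixed series of steps the computer follows exactly:

```python
def alphabetize(names):
    """Put a list of names in alphabetical order using insertion sort:
    a fixed, hard-coded series of instructions."""
    ordered = list(names)  # work on a copy
    for i in range(1, len(ordered)):
        current = ordered[i]
        j = i - 1
        # Shift any names that sort later one slot to the right
        while j >= 0 and ordered[j].lower() > current.lower():
            ordered[j + 1] = ordered[j]
            j -= 1
        ordered[j + 1] = current
    return ordered

print(alphabetize(["Sasha", "Chris", "Jevan"]))  # ['Chris', 'Jevan', 'Sasha']
```

The instructions never change: given the same input, the same steps always produce the same output, which is what distinguishes this kind of algorithm from the learned, data-driven models discussed below.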
Today, algorithms help simplify otherwise complicated processes all the time, whether we know it or not. When you ask a clothing website to filter pajamas to see the most popular or cheapest options, you’re basically using an algorithm to say, “Hey, Old Navy, follow the steps to show me the cheapest pajamas.”
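That filter-and-sort step is itself a simple algorithm. A minimal sketch, assuming a hypothetical product list (the field names and data here are invented for illustration, not Old Navy's actual system):

```python
# Hypothetical product catalog; fields and values are invented.
pajamas = [
    {"name": "Flannel set", "price": 29.99, "rating": 4.8},
    {"name": "Cotton shorts", "price": 12.50, "rating": 4.1},
    {"name": "Fleece onesie", "price": 39.99, "rating": 4.6},
]

# "Show me the cheapest pajamas": sort by price, lowest first.
cheapest_first = sorted(pajamas, key=lambda item: item["price"])

# "Show me the most popular": sort by rating, highest first.
most_popular = sorted(pajamas, key=lambda item: item["rating"], reverse=True)

print(cheapest_first[0]["name"])  # Cotton shorts
print(most_popular[0]["name"])   # Flannel set
```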
All kinds of things can be algorithms, and they’re not limited to computers: a recipe, for example, is a kind of algorithm, as is the weekday morning routine you go through half-asleep before leaving the house.
“We use our own personal algorithms every day,” said Jevan Hutson, privacy and data security attorney at Seattle-based Hintze Law, who has studied AI and surveillance.
These models can be incredibly complex. Facebook, Instagram, and Twitter use them to personalize user feeds based on each person’s interests and past activity. Models can also be based on mounds of data collected over many years that no human could sort through. Zillow, for example, has been using its trademarked, machine-learning-assisted “Zestimate” to estimate home values since 2006, taking into account tax and property records, owner-submitted details such as the addition of a bathroom, and photos of a home.
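None of these companies publish their ranking code, but the general idea of scoring posts against a user's interests and past activity can be sketched in miniature. Everything below (the field names, weights, and data) is invented purely for illustration; real feed-ranking systems weigh far more signals:

```python
# Toy feed-ranking sketch: score each post by overlap with the user's
# stated interests, plus a bonus for authors the user has engaged with.
user = {
    "interests": {"cooking", "travel"},
    "liked_authors": {"alice"},
}

posts = [
    {"author": "alice", "topics": {"travel"}},
    {"author": "bob", "topics": {"sports"}},
    {"author": "carol", "topics": {"cooking", "travel"}},
]

def score(post):
    topic_overlap = len(post["topics"] & user["interests"])
    author_bonus = 2 if post["author"] in user["liked_authors"] else 0
    return topic_overlap + author_bonus

# Highest score first: this ordering is the "personalized feed".
feed = sorted(posts, key=score, reverse=True)
print([p["author"] for p in feed])  # ['alice', 'carol', 'bob']
```

Even in this toy version, the output depends on hand-picked weights; in a real system those weights are learned from data, which is part of why the results can be hard to explain.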
The risks of relying on algorithms
As Zillow’s case shows, however, offloading decision-making to algorithmic systems can also go horribly wrong, and it’s not always clear why.
Elsewhere online, Meta, the company formerly known as Facebook, has come under scrutiny for tweaking its algorithms in a way that helped spur more negative content on the world’s biggest social network.
There is often little more than a basic explanation from tech companies about how their algorithmic systems work and what they are used for. Beyond that, experts in tech and tech law have told CNN Business that even those who build these systems don’t always know why they come to their conclusions – which is why they’re often referred to as “black boxes.”
“Computer scientists, data scientists, at this present stage, they seem like wizards to a lot of people because we don’t understand what they’re doing,” said Gilliard. “And we think they always do, and they don’t always.”
Popping filter bubbles
The United States does not have federal rules on how companies can or cannot use algorithms in general, or those that exploit AI in particular. (Some states and cities have adopted their own rules, which tend to deal with facial recognition software or biometrics more generally.)
Facebook, for example, already offers a non-algorithmic feed option, although users are effectively discouraged from leaving it on permanently. A fairly well-hidden “Most Recent” button will show you posts in reverse chronological order, but your Facebook feed will revert to its algorithmically curated default once you leave the website or close the app. Meta stopped offering such an option on Instagram, which it also owns, in 2016.
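The difference between the two feed modes comes down to what the posts are sorted by. A minimal sketch, with hypothetical posts and an invented engagement score standing in for whatever signals a real platform uses:

```python
from datetime import datetime

# Hypothetical posts; timestamps and engagement scores are invented.
posts = [
    {"text": "A", "posted": datetime(2021, 11, 1), "engagement": 50},
    {"text": "B", "posted": datetime(2021, 11, 3), "engagement": 5},
    {"text": "C", "posted": datetime(2021, 11, 2), "engagement": 90},
]

# "Most Recent" mode: reverse chronological order, no curation.
most_recent = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Algorithmic default: ranked by predicted engagement instead.
curated = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["text"] for p in most_recent])  # ['B', 'C', 'A']
print([p["text"] for p in curated])      # ['C', 'A', 'B']
```

Same posts, different sort key: that one-line change is, in essence, what the proposed legislation would require platforms to let users toggle.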
Hutson noted that while the Filter Bubble Transparency Act clearly focuses on major social platforms, it will inevitably affect others, such as Spotify and Netflix, which rely heavily on algorithm-based curation. If the bill passes, he said, it will “fundamentally change” the business model of companies built entirely around algorithmic curation – a feature he suspects many users appreciate in some contexts.
“It’s going to have an impact on organizations far beyond those in the limelight,” he said.
AI experts argue that more transparency from the companies creating and using algorithms is critical. Luccioni believes laws mandating algorithmic transparency are needed before specific uses and applications of AI can be regulated.
“I see things changing, for sure, but there is a really frustrating mismatch between what AI is capable of and what it’s legislated for,” Luccioni said.