The proliferation of predictive modeling has opened the door to transformative technological advancement, but there is still a lack of clarity around many of these tools. The resulting ambiguity around algorithms and how they function has led to questions about ethics, and about whether the ultimate intention is to improve the lives of everyone or just a select few.
More and more people are becoming aware of the algorithms embedded in their daily lives. From the videos prioritized in YouTube feeds to the manufacturing algorithms that inform real-time supply chain decisions, algorithms are popping up everywhere. Despite this, there are still questions about who should be held accountable for algorithmic decisions: the humans who program them, or the technology itself?
The traditional definition of an algorithm, in this context, is a series of instructions that a computer follows in order to learn from input data. The output of that learning process is referred to as a model. In the simplest case, the model multiplies each input factor by a learned weight and adds the results together, though in practice the process is usually far more complicated.
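To make the "multiply each input factor" idea concrete, here is a minimal, purely illustrative Python sketch. The function names and toy data are our own inventions, not any real system's: a tiny model learns one weight per input by repeatedly comparing its predictions to known answers and nudging the weights.

```python
# Illustrative sketch only: a "model" in the simplest sense is a set of
# learned weights, one per input factor.

def predict(weights, features):
    # The model's output: each input multiplied by its weight, then summed.
    return sum(w * x for w, x in zip(weights, features))

def train(data, steps=1000, lr=0.01):
    # data: list of (features, target) pairs
    weights = [0.0] * len(data[0][0])
    for _ in range(steps):
        for features, target in data:
            error = predict(weights, features) - target
            # Gradient-descent update: nudge each weight to shrink the error.
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights

# Toy data: the target is 2*x1 + 3*x2, which training should recover.
data = [([1, 1], 5), ([2, 0], 4), ([0, 2], 6), ([1, 2], 8)]
weights = train(data)
print([round(w, 2) for w in weights])  # approximately [2.0, 3.0]
```

Note that the learned weights depend entirely on the data fed in: train the same algorithm on different data and a different model comes out, which is exactly why the same algorithm can behave so differently in different deployments.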
Ultimately, the effect of an algorithm depends on two primary factors: the data it is fed and the context in which the resulting model is deployed. These variables account for much of the ambiguity over whether algorithms are a net positive or negative, as the same algorithm can produce very different results in different contexts.
While this might sound like a small issue, it can have genuinely harmful results. In December 2020, a distribution algorithm developed by Stanford Medical Center misallocated COVID-19 vaccines, seemingly favoring top-level administrators over the doctors on the front lines of the pandemic. Despite an apparent effort to consult ethicists during design, the algorithm turned out to work in a simple way, effectively mirroring a decision tree designed by its human developers.
This issue of ethics surrounding algorithms has caught the attention of lawmakers: the Algorithmic Accountability Act (H.R. 2231) was introduced in the US Congress in 2019. It is only one example of algorithms getting a legal spotlight in recent years, as disagreements over what counts as an algorithm are on the rise. A universal definition of the term would allow lawmakers and the public to assess these systems based on their real-world impact. The hoped-for result is the avoidance of potentially harmful algorithms, as we prioritize the impact rather than the input data.
Algorithms are being deployed across advertising and marketing tools and platforms. These algorithms claim to help businesses get in front of the people who actually want the products or services being advertised. From Google’s search ad algorithms to Facebook’s news feed algorithms, Big Tech is doing everything it can to tailor its platforms to its users’ interests.
The concern here is that a user’s interests are not so easily definable; humans are prone to interact with things they hate just as much as things they love. You would think that marketing personalization would benefit the end user, but instead personalization has pushed us further into our own bubbles, as algorithms fuel the spread of viral information regardless of its impact.
Furthermore, businesses that depend on algorithms to improve advertising efforts are not always seeing the upside. While Google’s algorithms can be quite effective, for example, they can also lead to lost traffic, reduced sales, higher advertising costs, and a drop in search ranking. This forces businesses into a position where they themselves must become experts on the algorithm if they hope to take advantage of its benefits.
On a much deeper level, advertising algorithms can even leave a consumer feeling exposed or unsafe. Targeted advertising is a powerful tool, and one that should be wielded responsibly. On Facebook, targeted advertising has become so pervasive that the platform can potentially monitor your actions off the platform to inform the ads it serves in your feed. Perhaps you purchased some new sneakers from your favorite footwear website while still logged into Facebook; if so, there is a chance you will soon start seeing ads for shoelaces or matching socks inside Facebook.
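For readers curious about the mechanics, here is a hypothetical Python sketch of how that kind of off-site tracking can work in general: a retailer embeds a "tracking pixel" that reports the purchase, along with the browser's ad-platform cookie, back to the platform. All names and data below are invented for illustration and do not describe Facebook's actual implementation.

```python
# Hypothetical sketch: linking an off-site purchase to an on-platform
# identity via a tracking pixel and a shared browser cookie.

# The platform's (invented) cookie store maps cookie IDs to logged-in users.
cookie_to_user = {"cookie_abc123": "user_42"}

# Per-user interest profiles inferred from tracked events.
ad_interest_profile = {}

def handle_pixel_request(cookie_id, event):
    # The pixel request arrives with the browser's platform cookie attached,
    # so an off-site purchase can be tied to an on-platform identity.
    user = cookie_to_user.get(cookie_id)
    if user is None:
        return  # unknown browser or logged out: event cannot be linked
    ad_interest_profile.setdefault(user, []).append(event["category"])

# A sneaker purchase on a third-party site fires the pixel...
handle_pixel_request("cookie_abc123",
                     {"action": "purchase", "category": "footwear"})

# ...and the platform can now target related ads (laces, socks) at user_42.
print(ad_interest_profile)  # {'user_42': ['footwear']}
```

The key point of the sketch is that no explicit sharing by the user is required: being logged in while browsing elsewhere is enough to connect the two activities.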
There are seemingly endless examples of how advertising platforms are using algorithms to improve their targeting, but what is important here is the discussion around the ethics of this behavior and what the overall impact is on our society. We as consumers should not be forced to partake in the dissolution of our own privacy, solely for the profit of Big Tech companies that are inflating their advertising revenue.
At ReverseAds, our mission is to break the chains of big tech and return the internet to the people. To achieve this, we understand the critical importance of providing privacy-first solutions that shield user data from the prying eyes of corporations looking to sell data off to the highest bidder. This is why we pair all of our algorithms with blockchain and distributed web technology.
Move your advertising in a privacy-first direction and incorporate algorithms that prioritize impact over input. Email firstname.lastname@example.org to find out more about our proprietary adtech algorithms.
© 2021 ReverseAds Inc. All rights reserved. Various trademarks held by their respective owners. 41/7, Rawai, Mueang Phuket, Phuket 83130