Misinformation: Can it Be Stopped?
By Stefan Patch
Somewhere in the middle of The Social Dilemma, during the discussion of how the algorithms used by social media try to recommend rabbit holes, I wondered: why don't these companies try to recommend rabbit holes that lead toward a conclusion the company desires? Why not try to skew the collective opinion to be more in your favor? Honestly, from this documentary alone, that question remains unanswered, but perhaps the answer is as simple as "it makes ever so slightly more money to divide everyone than to unite us, even if the point of unity is exactly where we would like everyone to be".
This question led me to think about potential ways these algorithms could be changed to be less hostile toward us, and by the end of the documentary, I found myself wanting politicians, and not just a subset but all of them, to work together with people knowledgeable in the field of technology to come up with a modern set of internet and social media laws. Laws that try not to "patch issues" or lock everything down, but bring back some inspiration for technology that has human interest at heart again. At one point it was mentioned that there are strict regulations on what can be advertised to kids on TV during Saturday morning cartoons, and I just don't see why that shouldn't also be the case for a kid's Saturday morning YouTube videos. It's just this generation's version of the same entertainment. The ads served to kids simply need to comply with the regulations that already exist about what is and is not approved. This is a perfect example of an area where regulation would be extremely straightforward.
Moving on to one of the larger points, portrayed by the fictional story of the family in which the older sister is anti-phone: there should likely be some regulation regarding misinformation. One may ask, 'how would it be possible to check everything online for accuracy?' I would ask in response: how is it possible that YouTube knows which video you want to watch right now? How is it possible that Facebook knows you are more likely to be interested in the content of a Flat Earth group? When one of the women interviewed (whose name I did not catch) suggested that 'Google can't figure out if something is the truth or not', I found that claim entirely unreasonable. I think these companies simply haven't had a financial reason to look into this yet, and interventions like this, prioritizing correct information, would just take time to get right, as the recommendation algorithms themselves did in the first place. Regulations to this effect, requiring, for example, that an AI that 'recommends things' must have some form of 'care for the users', would be a great addition, but I can definitely see them being more challenging to write in a way that is not easily circumvented or off the mark, in contrast to the earlier example of kids' ads.
I can see at this point that I have too many thoughts in this area to fit here, but overall, based on the examples in the documentary, there are clearly areas that need improvement, and I'm confident those areas can be improved upon without the need to burn everything down first.