Well-designed policies alone cannot prevent social harm from new technologies. Instead, watchdogs must have the tools to scrutinize how such policies are implemented, paving the road for digital accountability.

In May 2023, the leaders of the G7 nations called for “guardrails” to limit the potential damage caused by artificial intelligence (AI). Days later, the CEO of OpenAI, the company that developed ChatGPT, advised Congress to pass safety regulations for AI models. In 2022 alone, nine AI-related US federal laws and 21 state-level laws were passed, and since 2015 AI has been discussed with growing frequency in congressional committees: the term was mentioned 73 times in committee reports produced by the House and Senate in 2021–2022. Meanwhile, the European Union is working out the Artificial Intelligence Act to minimize the threats that applications of machine learning might pose to privacy, security, and democratic values.

The frenzy of tech policymaking is not only about AI. The European Union invested almost seven years of intense work in crafting the General Data Protection Regulation (GDPR), aiming to protect EU citizens’ privacy, and it has recently been working to pass the Digital Services Act and the Digital Markets Act to increase liability and competition in online markets. Several US states are about to enforce new GDPR-like rules, and a new federal privacy law is potentially on the horizon. On the cybersecurity front, 36 US states enacted new legislation in 2021 alone to demand more protection and transparency.

Articles in Nature, Wired, The Economist, and at the Brookings Institution have called for the creation of new crosscutting agencies to figure out how to handle new technologies and design policies that can prevent social harm from the latest developments. Innovation scholars Gary Marchant and Wendell Wallach have called for government coordination committees to design policies for fast-moving technologies. However, there is little discussion about how to create accountability when implementing tech policies. Each proposal overlooks a crucial ingredient for success: the bits and bytes of how technology policy is implemented.

I study the bridge between computer science and public policy, trying to understand how fast-moving information technology is governed. Time and again, good-faith policies are thwarted by those who put policy into practice.

There are many examples of well-intentioned tech policies failing to protect society. Investigative journalists and activists have uncovered how hospitals sent sensitive medical information to Facebook, potentially violating the Health Insurance Portability and Accountability Act, which regulates the use of patient health data. They have also revealed how machine learning software deployed by police departments has led to wrongful arrests, violating citizens’ rights under the Fourth Amendment, which bans unreasonable searches and seizures. And researchers have found that advertising companies use sophisticated practices such as “cookie respawning” with “browser fingerprinting” to dodge legal requirements and maintain surveillance of users over time. Despite the many privacy and cybersecurity laws enacted around the world, data breaches and privacy abuses remain frequent and harmful.

Decades of research on policy implementation across diverse areas consistently shows that successful implementation involves crucial bargaining and allows policies to be adapted. But this is rarely understood in the tech sector. For tech policies to work, those responsible for enforcement and compliance should be overseen and held to account. Otherwise, as history shows, tech policies will struggle to fulfill the intentions of their policymakers. Scrutiny is required for three types of actors.

First are regulators, who convert promising tech laws into enforcement practices but are often ill-equipped for their mission. My recent research found that across Europe, the rigor and methods of the national privacy regulators tasked with enforcing the European Union’s GDPR vary greatly. The French data protection authority, for instance, proactively monitors for privacy violations and strictly sanctions companies that overstep; Bulgarian authorities, in contrast, monitor passively and are hesitant to act. Reflecting on the first five years of the GDPR, Max Schrems, the chair of privacy watchdog NOYB, found authorities and courts reluctant to enforce the law and companies free to take advantage: “It often feels like there is more energy spent in undermining the GDPR than in complying with it.” Variations in resources and technical expertise among regulators create regulatory arbitrage that the regulated eagerly exploit.

Tech companies are the second type of actor requiring scrutiny.