Posted by Kathleen T. Meagher.
In mid-May 2023, Sam Altman, CEO of OpenAI, the company behind ChatGPT, openly stated to Congress that the time has come for regulators to “start setting limits on powerful AI systems.” Because of the “significant harm to the world” these powerful AI systems can pose to humankind, Altman and lawmakers alike agreed that government assistance and oversight will be “critical to mitigating the risks.” Owing to AI’s immense rise in popularity and its saturation of the social media market, “the use of generative AI has exploded.” In the article, AI, specifically generative AI, is considered a “Big Bang Disruptor,” defined most simply as “a new technology that, from the moment of release, offers users an experience that is both better and cheaper than those with which it competes.” According to the verbiage used in the article, this rising rate of usage of AI systems can be interpreted as both positive and negative. The authors describe these systems as remarkable, limitless, and excitement-inducing. However, they go on to mention that the seeming limitlessness of these programs raises potential issues of privacy, bias, and even national security, going so far as to say that it is “reasonable for lawmakers to take notice.”
The authors, Blair Levin and Larry Downes, highlight that the U.S. Congress is attempting to spearhead the regulation of AI. One example cited in the article is Senator Chuck Schumer “calling for preemptive legislation to establish regulatory ‘guardrails’” on AI products and services. Some of these guardrails entail a focus on government reporting, user transparency, and “aligning these systems with American values, and ensuring that AI developers deliver on their promise to create a better world.” Levin and Downes go on to suggest that the vagueness of this proposal undermines its promise. Personally, depending on the ethics of each AI developer, this could go one of countless ways. I believe that making AI developers alone the judge and jury for such a globally used and widespread product cannot possibly be a sound choice. Alongside Congress’s attempt to regulate AI, the Biden Administration believes it is also in the running with its White House Blueprint for an AI Bill of Rights. Similar to Congress’s “guardrails,” the White House’s AI Bill of Rights calls on developers to ensure the neutrality of these systems in order to prevent privacy violations. In addition, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) has “opened an inquiry about the usefulness of audits and certifications for AI systems,” while also requesting comments on dozens of questions regarding “accountability for AI systems, including whether, when, how, and by whom new applications should be assessed, certified, or audited, and what kind of criteria” should be included in that conversation. Levin and Downes also mention that the Federal Trade Commission has claimed that its agency already has jurisdiction over AI.
I am in agreement with Federal Trade Commission Chair Lina Khan when she states that AI could exacerbate pre-existing issues in technology such as “collusion, monopolization, mergers, price discrimination, and unfair methods of competition.” In addition, the risk of turbocharging fraud, committed intentionally or otherwise with AI, becomes elevated. Engaging the United States courts, the European Commission, or Congress in regulating the many avenues that utilize artificial intelligence poses questions for business owners. This inevitably becomes a larger question of the government’s involvement in regulating business operations both in the U.S. and globally.
Although upsides can be hypothesized for AI’s relationship to business, the line becomes muddied regarding what areas AI will be cleared to operate in once the above-mentioned laws are put into action. Some would limit it solely to the health and medical fields, whereas it has recently been used in the hiring processes of certain industries. According to Levin and Downes, the issues AI potentially poses for businesses include “misinformation, copyright, and trademark abuse.” The authors argue that joint government action to regulate AI is futile because, in their view, law advances incrementally while technology evolves exponentially. I cannot say I agree with this statement entirely, as it is not entirely true. The reason technology has been allowed to evolve exponentially is precisely the lack of regulation and implemented rules.
Kathleen is a marketing major at the Stillman School of Business, Seton Hall University, Class of 2025.
Harvard Business Review Article Link:
“Who Is Going to Regulate AI?” by Blair Levin and Larry Downes, Government Policy and Regulation: https://hbr.org/2023/05/who-is-going-to-regulate-ai