Is AI the New 'Space Race'?

Posted by Kathleen T. Meagher.

In mid-May 2023, Sam Altman, the CEO of OpenAI (the company behind ChatGPT), openly stated to Congress that the time has come for regulators to “start setting limits on powerful AI systems”. Because of the “significant harm to the world” these powerful AI systems can pose to humankind, Altman and lawmakers alike have agreed that government assistance and oversight will be “critical to mitigating the risks”. Due to AI’s immense rise in popularity and its saturation of the social media market, “the use of generative AI has exploded”. In the article, AI, specifically generative AI, is considered a “Big Bang Disruptor”, which is most simply defined as “a new technology, from the moment of release, offers users an experience that is both better and cheaper than those with which it competes”. According to the verbiage used in this article, this rising rate of usage of open AI systems can be interpreted as both positive and negative. Both authors describe these systems as remarkable, limitless, and excitement-inducing. However, they go on to mention that the seeming limitlessness of these programs raises potential issues involving privacy, bias, and even national security, going so far as to say that it is “reasonable for lawmakers to take notice” of this.

Both authors, Blair Levin and Larry Downes, highlight that the U.S. Congress is attempting to spearhead the regulation of AI. An example of this can be seen in the article when Chuck Schumer is mentioned as “calling for preemptive legislation to establish regulatory ‘guardrails’” on AI products and services. Some of these guardrails entail a focus on government reporting, user transparency, and “aligning these systems with American values, and ensuring that AI developers deliver on their promise to create a better world”. Both Levin and Downes go on to suggest that the vagueness of this proposal lacks promise. Personally, I think that, depending on the ethics of every AI developer, this could go one of countless ways. I believe that solely having AI developers be considered the judge and jury for such a globally used and widespread product cannot possibly be a sound choice. Alongside the U.S. Congress’s attempt at regulating AI, the Biden Administration believes it is also in this competition with its White House blueprint for an AI Bill of Rights. Similar to the “guardrails” of Congress, the White House’s “AI Bill of Rights” is a call to action for developers to ensure the neutrality of their systems in order to prevent privacy violations. In addition to this, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) has “opened an inquiry about the usefulness of audits and certifications for AI systems” while also requesting comments on dozens of questions regarding “accountability for AI systems, including whether, when, how, and by whom new applications should be assessed, certified, or audited, and what kind of criteria” should be included in that conversation. Both Levin and Downes mention that, in addition, the Federal Trade Commission has claimed that its agency already has jurisdiction over AI. I am in agreement with the Federal Trade Commission chair, Lina Khan, when she states that AI could lead to the exacerbation of pre-existing issues in technology like “collusion, monopolization, mergers, price discrimination, and unfair methods of competition”. In addition to this, the risk of turbocharging fraud, committed intentionally or otherwise with AI, becomes elevated. Engaging the assistance of the United States courts, the European Commission, or Congress in regulating the many avenues that utilize artificial intelligence poses questions for business owners. This inevitably becomes a larger question of the government’s involvement in regulating the operations of one’s business, both in the U.S. and globally.

Although upsides can be hypothesized for AI’s relation to businesses, the line becomes muddied regarding which areas AI is cleared to be involved in once the above-mentioned laws are put into action. Some would limit it solely to the health and medical fields, whereas in recent times it has also been used in the hiring processes of certain industries. The issues that AI potentially poses for businesses include “misinformation, copyright, and trademark abuse,” according to Levin and Downes. According to the authors, the implementation of joint government actions to regulate AI is futile because they believe that law advances incrementally while technology evolves exponentially. I cannot say that I agree with this statement entirely. The reason technology has been allowed to evolve exponentially is precisely the lack of regulation and implemented rules.

Kathleen is a marketing major at the Stillman School of Business, Seton Hall University, Class of 2025.

Harvard Business Review Article Link:

“Who Is Going to Regulate AI?” by Blair Levin and Larry Downes: https://hbr.org/2023/05/who-is-going-to-regulate-ai

U.S. Supreme Court to Examine NRA’s Claims of Government Pressure on Insurers

Posted by Markus Hand.

The U.S. Supreme Court has agreed to review a case involving the National Rifle Association (NRA) and its claims that a former New York state official, Maria Vullo, unconstitutionally pressured insurance companies like Chubb Ltd. and Lloyd’s of London to cease doing business with the gun lobby. The case also names former New York Governor Andrew Cuomo as a defendant. The NRA alleges that the actions of these officials amounted to “blacklisting” and violated their free speech rights. This case will have significant implications for the extent to which government officials can use their positions to undermine the activities of advocacy groups. The Supreme Court is set to hear arguments and make a ruling by the end of June.

The dispute revolves around Vullo’s investigation of the NRA’s “Carry Guard” insurance program, which covers losses associated with the use of personal firearms, including criminal defense costs. The NRA claims that Vullo extended her targeting to other insurance products endorsed by the NRA and threatened insurance companies with investigations and penalties if they didn’t distance themselves from the gun rights organization. The NRA cites guideline letters and a press release in which Vullo urged insurers to consider the reputational risk of dealing with the NRA, particularly following the 2018 shooting at Marjory Stoneman Douglas High School in Parkland, Florida.

Vullo’s lawyers argue that she did not violate the First Amendment and was merely expressing her views regarding a national tragedy and encouraging regulated entities to evaluate their relationships with gun-promotion organizations. This case raises important questions about the limits of government officials’ authority in influencing the actions of private entities related to advocacy groups, and the Supreme Court’s decision will impact how such cases are handled in the future.

Markus is a sports management major at the Stillman School of Business, Seton Hall University, Class of 2024.

https://news.bloomberglaw.com/us-law-week/nra-gets-supreme-court-review-on-new-york-blacklisting-claim

The Bankman-Fried Case

Posted by Christine Han.

This recently published article, “Bankman-Fried’s Pre-Trial Antics Haunt Him Before Sentencing,” was written by Matthew Bultman, who takes a deep dive into the topic at hand. Sam Bankman-Fried has recently spent his time behind bars as he awaits sentencing following his conviction for defrauding FTX customers. Former federal prosecutors believe that Bankman-Fried will receive over 20 years in prison since this is a “high-profile white-collar case” like that of Elizabeth Holmes (who defrauded Theranos Inc. investors). Others suggest that, on the high end, the 31-year-old Bankman-Fried could be subject to life in prison. In some cases, the judge could be more lenient, but, apparently, Bankman-Fried “did himself no favors by rankling the judge, both before and during the trial.”

According to District Judge Lewis A. Kaplan, Bankman-Fried has “‘shown a willingness and a desire to risk crossing the line in an effort to get right up to it, no matter where the line is.’” In the past, prosecutors had raised concerns about Bankman-Fried’s use of encrypted messaging to contact FTX’s US general counsel, which was flagged as an attempt at witness tampering. Bankman-Fried has continuously denied the accusation of defrauding FTX customers by depicting himself as inexperienced with running a business. Along with these actions, he has also lied on the witness stand.

I believe that, given the amount of money involved in this defrauding scheme, Bankman-Fried’s sentence could be much longer. As the article mentions, it is possible for the judge to reduce his sentence, but it is evident that Bankman-Fried has not been on his best behavior. In fact, the article mentions how prosecutors have compared this case to that of Bernie Madoff, “who in 2009 was sentenced to 150 years in prison for running a Ponzi scheme that lost billions of dollars.” In my opinion, if a sentence this long has been handed down before in a case similar to Bankman-Fried’s, it is very possible for him to receive life in prison.

Christine is a mathematical finance and IT management double major at the Stillman School of Business, Seton Hall University, Class of 2026.

https://news.bloomberglaw.com/securities-law/bankman-frieds-pre-trial-antics-haunt-him-before-sentencing-1

Fear of Election Chaos

Posted by David Gabriel.

The article discusses the growing number of lawsuits aiming to disqualify Donald Trump from running for the presidency again in 2024 due to his alleged efforts to overturn the 2020 election results. These cases primarily revolve around Section 3 of the 14th Amendment, which bars individuals who engaged in insurrection from holding public office. The legal battles pose several untested questions, such as whether courts can enforce disqualification, whether this measure applies to a president’s conduct, and whether the January 6, 2021, Capitol attack constitutes an “insurrection” and if Trump was involved in it.

Lawyers and judges involved in these cases anticipate that the US Supreme Court may ultimately need to resolve this issue. With state election officials finalizing ballots in early 2024, the Supreme Court might be forced to address this matter quickly. The cases have raised concerns about potential chaos, with some states allowing Trump on their ballots and others excluding him. The outcome of these legal battles remains uncertain, but they highlight the complexities of various election laws and court procedures in different states.

As of now, more cases are being filed across the country, and some have already been dismissed due to procedural defects. Trump and his campaign have had to prepare for these legal challenges well in advance of the upcoming elections, as the lawsuits continue to unfold in multiple states.

David is a sports management major at the Stillman School of Business, Class of 2026.

Article Link: https://www.bloomberg.com/news/articles/2023-11-05/trump-2024-election-ballot-challenges-could-go-to-supreme-court

Panera’s Caffeine Lawsuit

Posted by Elizabeth Fernandez.

This article discusses a lawsuit against Panera over the highly caffeinated beverage that took the life of a UPenn college student. In September 2022, Sarah Katz passed away suddenly, at first for no apparent reason. It was later found that Katz had consumed a large Charged Lemonade from Panera, which “contains more caffeine than standard cans of Red Bull and Monster energy drinks combined, as well as the equivalent of almost 30 teaspoonfuls of sugar,” as reported by Elizabeth Chuck. A large Charged Lemonade contains about 390 milligrams of caffeine, while the generally recommended maximum “safe” amount of caffeine is 400 milligrams daily. In the Panera café Katz visited before she passed, there was no prominent warning stating how much caffeine was in the Charged Lemonade, nor any mention of guarana extract, another stimulant with effects similar to caffeine.

The reason this death is becoming a big lawsuit is that Sarah Katz had a pre-existing heart condition and knew she should avoid large amounts of caffeine. As Sarah’s friend from Penn said, “I guarantee if Sarah had known how much caffeine this was, she never would have touched it with a 10-foot pole,” showing how cautious Katz was about her heart condition. Panera’s failure to disclose the ingredients and caffeine levels in its Charged drinks is the main issue in this lawsuit. At Panera cafés, the Charged Lemonades are placed close to the regular fountain machine with drinks that have little to no caffeine. The Charged Lemonades are also very close to some of Panera’s regular lemonades and teas that are low in caffeine and are advertised as being “natural” and “clean”.

In my opinion, the Katz family has every right to go through with the lawsuit, and Panera is in the wrong. Since Panera knew how much caffeine was in the drinks it was serving, it also knew how dangerous consuming high levels of caffeine can be. Panera should have posted a warning on the drink dispenser stating how much caffeine is in each cup size, along with a list of ingredients. As Chuck mentioned, Panera should also discourage refills of the Charged drinks to prevent overconsumption of caffeine, which could lead to serious health issues.

Elizabeth is a marketing major at the Stillman School of Business, Seton Hall University, Class of 2026.

Work Cited

Chuck, Elizabeth. “Panera Now Displaying Warning about Its Caffeinated Lemonade in All Stores after Lawsuit over Customer’s Death.” CNBC, CNBC, 29 Oct. 2023, www.cnbc.com/2023/10/29/panera-now-displaying-warning-about-its-caffeinated-lemonade-in-all-stores-after-lawsuit-over-customers-death.html.

SpaceX Countersues Justice Department, Seeking to Dismiss Hiring Discrimination Case

Posted by Diego Fernandez.

SpaceX, the aerospace company founded by Elon Musk, has initiated a legal counterattack against the U.S. Department of Justice (DOJ), filing a lawsuit in a Texas federal court. This action is in response to the DOJ’s allegations of hiring discrimination by SpaceX against refugees and individuals granted asylum in the United States. The lawsuit challenges the DOJ’s case on constitutional grounds, marking a significant legal battle between a private company and a federal agency.

In its countersuit, SpaceX vehemently denies engaging in any discriminatory hiring practices. The company’s position, as articulated by its legal counsel Akin Gump Strauss Hauer & Feld, asserts that SpaceX is committed to hiring the most qualified candidates for every job, irrespective of their citizenship status. The company highlights its track record of hiring hundreds of noncitizens, affirming its inclusive employment practices.

Central to this legal dispute is the question of who SpaceX can employ under military technology regulations, particularly the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations. SpaceX argues that every employee within the company has access to technology and data governed by these statutory and regulatory frameworks. This contention underscores the complexity of the issue and its relevance to national security.

SpaceX’s rapid growth over the years is exemplified by its employment of more than 13,000 individuals across the United States. The company reveals that its job postings routinely receive a substantial number of applications, with an average of over 90 applications per posting and even more for engineering positions. This competitive hiring process, the company claims, is more selective than prestigious U.S. colleges, resulting in an acceptance rate of about 1%. The Department of Justice commenced its investigation into SpaceX in June 2020, prompted by a complaint of employment discrimination from a non-U.S. citizen. This investigation ultimately led to the DOJ’s legal action against the company, alleging unfair hiring practices.

In conclusion, SpaceX’s countersuit against the Department of Justice marks a significant legal battle with potentially far-reaching implications. The company vehemently denies any wrongdoing and is committed to defending its hiring practices on constitutional grounds. As the case unfolds, it will shed light on the intersection of national security regulations, immigration, and employment practices in the context of the aerospace industry. The outcome of this lawsuit will be closely watched by both legal experts and industry observers.

Diego is a marketing major at the Stillman School of Business, Seton Hall University.

https://www.cnbc.com/2023/09/20/spacex-countersues-doj-in-hiring-discrimination-case.html

Trademark and Political Phrases

Posted by Tyler Fernandes.

The article that I will be discussing is “Trump wanted to trademark ‘Rigged Election’ and other key findings from the Jan. 6 panel’s latest release” (CNN). I am a sophomore (Class of 2026) from Harrison, NJ, looking to major in finance and accounting. Getting back to the article, it begins with an overview of the incident on January 6 and discusses the committee’s probe into that day. One key witness in this probe was Clarence Thomas’s wife, who, as revealed text messages showed, contacted Trump to question the election results. Another key piece of evidence was the account of a witness in Trump’s motorcade. This gave the committee more credible information and more authority to move the investigation forward. Not only did this help the Democrats, the party on offense, gain favorability; it also discredited the Republican Party and its ideas.

The article then goes into the very thing Trump wanted to trademark: “Rigged Election.” Emails to Jared Kushner show this to be worthy and credible information. One message included with Kushner’s testimony reads, “Hey Jared! POTUS wants to trademark/own rights to below, I don’t know who to see – or ask…I don’t know who to take to” (CNN). This shows that Trump’s intent was real and not just words circulating around the media. Trump also wanted to trademark several other phrases, including “Save America PAC,” about which Kushner wrote, “Guys – can we do ASAP please?” (CNN). While this has little to no importance to the January 6 hearings themselves, it seems to further put forward to Americans the idea that former President Trump wanted “something to take with him” when he left office. In my opinion, this information should not be mainstream to this extent, as it does not prove or disprove anything in his hearings.

As Trump’s lawyers tried during this unruly period to find a way to protect him from the Democratic Party, it became increasingly challenging to keep the matter hidden from the American eye. Whether it was the Russian collusion allegations, January 6th, or even Covid, the misinformation and differing opinions floating around the media and the country made it hard for voters to hold a common interest with the now-former president. While in office, President Trump overcame many obstacles, including the ones mentioned, and still found a way to make the country economically and politically safer.

Tyler is a finance major at the Stillman School of Business, Seton Hall University, Class of 2026.

Works Cited

https://www.cnn.com/2022/12/30/politics/january-6-transcript-release-latest/index.html


Internet Law: Laws and Regulations for Artificial Intelligence

Posted by Derek Diskant.

This article talks about artificial intelligence and the laws and regulations that should be put into place. Different aspects of artificial intelligence are creating problems for law and society. For example, it mentions that “Generative AI creates not only new text, code, audio, or video, but problems with deepfakes, plagiarism, and falsehoods presented as convincing facts” (Theodore Claypoole). There are people who don’t understand how to use artificial intelligence responsibly and effectively; instead, some abuse it for what it can do without understanding the problems that creates. The article states, “We need to think differently about AI before determining how to treat it” (Theodore Claypoole). Harm could follow if people don’t think about AI differently and don’t consider the problems it could cause, so rules and regulations must be put in place to prevent that harm. However, the article also cautions that “Passing a law to ‘restrict artificial intelligence’ is a dangerous exercise under current definitions” (Theodore Claypoole). A law that broadly restricts the use of artificial intelligence should not be passed, because AI still has so much potential and is very beneficial. The article is trying to say that laws and regulations need to be put in place to prevent the problems artificial intelligence can cause and to govern how people use it.

The article then discusses different categories of artificial intelligence and how they should be treated. It proposes a list of categories meant to persuade legislators and regulators to consider the potential of artificial intelligence and to help them make effective rules and laws. It mentions, “Some of these lines blur, and certain technical or social problems are shared across classifications, but thinking of current AI solutions in legally significant functional categories will simplify effective rulemaking” (Theodore Claypoole). This should be taken into consideration because artificial intelligence could have a huge impact on the world. The categories of artificial intelligence include Automating AI, Generative AI, Physical Action AI, Strategizing AI, Decisioning AI, Personal Identification AI, Differentiating AI, and Military AI. These kinds of artificial intelligence are used for different purposes and are meant to have a positive impact on the world, but in a safe manner. The article also mentions, “The above categorizations provide a safer place to start if we wish to regulate a vast shifting technology. By adopting this thinking, AI management becomes less daunting and more effective” (Theodore Claypoole). This proposal is a great place to start if we want to regulate artificial intelligence effectively: it provides a safer framework and helps ensure that effective laws are put in place to prevent harm.

Overall, I agree that laws and regulations should be put in place to prevent the problems that artificial intelligence can bring. These problems include deepfakes, plagiarism, potential harm, and the replacement of people’s jobs. There are also people who abuse AI instead of using it responsibly, and irresponsible use carries consequences. However, I also agree that passing a law that broadly restricts the use of artificial intelligence would be a bad idea, because AI still has a lot of potential and can positively impact the world. There is still a lot to learn about AI, and it can dramatically benefit people’s lives in many ways. There are different types of AI, each with its own uses and benefits. To stop the problems artificial intelligence can cause, certain laws and regulations must be put in place to prevent harm while still allowing AI to be used responsibly. I believe this would be a great start because it allows us to treat AI as a supporting tool: with these rules and regulations in place, AI can be used responsibly to help us in all kinds of situations. Effective laws that address the problems or harm artificial intelligence can bring would provide a safe environment for people to use it responsibly and effectively, and for it to serve as a tool when necessary.

Derek is a finance major at the Stillman School of Business, Seton Hall University, Class of 2025.

Work Cited

Claypoole, Theodore. “AI Classifications for Law and Regulation.” American Bar Association Business Law Section, 15 September 2023, https://businesslawtoday.org/2023/09/ai-classifications-for-law-and-regulation/

End of an Era: McDonald’s on Front Street Closes Amidst San Francisco’s Economic Shifts

Posted by Nasser Coutinho.

The closure of the McDonald’s on Front Street in downtown San Francisco signifies more than just the shutdown of a fast-food outlet. It serves as a testament to the transformations occurring in urban areas following the COVID-19 pandemic. Having been in operation for thirty years, this establishment ultimately fell victim to the shifts within its surroundings. According to Scott Rodrick of the Rodrick Management Group, “The economics of running a franchised restaurant in San Francisco continue to be a challenge” due to the high vacancy rates of downtown office buildings and the slow resurgence of tourism (Aitken, 2023). It’s clear from the article that the closure was not a matter of if, but when, as the “level of vibrancy” necessary to sustain such businesses has diminished since the pandemic (Aitken, 2023).

In a city once teeming with office workers and tourists, the quieted streets and empty offices have created an unsustainable situation for businesses that thrived on foot traffic. The decision to close was likened to a “gut punch” by Rodrick, a sentiment that underscores the distress many business owners feel as they navigate the post-pandemic economy (Aitken, 2023). The shift in the commercial real estate market is palpable, with properties changing hands at a “steep discount” and a “full-on buyer’s market” emerging, revealing the extent of the economic downturn in these urban areas (Aitken, 2023).

The new wage law in California, which aims to increase the minimum wage for fast-food workers to $20 by April 2024, is posing challenges for businesses like McDonald’s. Chris Kempczinski, the CEO of McDonald’s, has voiced his concerns about the impact this will have on franchisee operations: “there will certainly be a hit in the short term to franchisee cash flow in California” (Aitken, 2023). These ongoing difficulties demonstrate the delicate balance between guaranteeing wages for employees and upholding a thriving business atmosphere in cities such as San Francisco, where the past vitality of the economy appears to be fading further away.

Nasser is a business student at the Stillman School of Business, Seton Hall University, Class of 2025.

Article Link: https://www.foxbusiness.com/fox-news-food-drink/downtown-san-francisco-mcdonalds-location-closes-30-years-not-recovered-pandemic

Are Social Media Companies Actually the Ones at Fault for the Increase in Mental Health Issues Amongst the Youth in America?

Posted by Angelina Carlisle.

There has been debate about whether or not social media companies are responsible for the increase in mental health issues amongst the youth. There are numerous laws that defend against this allegation, specifically Section 230 of the Communications Decency Act, which protects online platforms from liability for the content posted by their users and for decisions to take that content down. But if a company uses the content created by its users to target specific groups of people, particularly children and teens, preying on their insecurities and their time in order to profit from them, it could be argued that this conduct falls outside that protection.

An article posted by Fox Business discusses how several states across America are putting together a lawsuit to protect the mental well-being of teens and children from these large companies.

“The lawsuit, which involves at least 32 states so far, alleges Meta “misled its users and the public by boasting a low prevalence of harmful content,” while being “keenly aware” its platforms’ features “cause young users significant physical and mental harm.” The filing says that Meta’s recommendation algorithm promotes “compulsive use,” which the company does not disclose. The lawsuit claims that social comparison features like “Likes” promote mental health harms for young users, while visual filters are known to promote body dysmorphia and eating disorders.” (Fox Business).

Therefore, social media companies are allegedly promoting content that makes people feel more self-conscious about how they look and feel about themselves. But the harm is also social: these platforms revolve around the mission of helping users create new connections and strengthen existing ones with people from anywhere in the world, which increases the amount of time users spend on the platforms and also means more money for these corporate giants.

Cases like these question the ethics of business and the effectiveness of the laws of capitalism. For instance, it is seen as unethical to sell someone a product you know does not work simply for your own profit; depending on the product, this can scam them out of their money or even put them at risk. But under the ethics of capitalism, that could be viewed as a flaw of the consumer for not knowing enough about the product they are purchasing to see that it was a flop. Nowadays, this type of behavior is rewarded, and we see it occur most often in big pharma. To put this into perspective, a pharmaceutical company may send out medication with a lack of research or testing while being well aware of what will happen to the consumer who takes it; but instead of its message saying “This will help cure you of illness A, B, and C,” it says “This might help cure you of illness A, B, and C,” thereby acknowledging that the drug may not work as promised while making the company less liable for the outcomes of its failure. So by the time a lawsuit comes around, all it may have to do, if anything, is pay a small fine and possibly take the product off the market.

The point is that social media companies work the same way. They promise you a fun, personalized experience by collecting your personal information and feeding it to their algorithm, which you acknowledge and permit by accepting the terms and agreements when you first open the app. At that point you know that you could be setting yourself up to become hooked on the app, and you are willingly sacrificing your time to use it. If and when the potential downsides of the product, such as the increase in mental health issues among children, are brought to a company’s attention, it can alter its code to help lessen the issue. But when the issue is addressed in court, the company can argue that the user willingly accepted the terms of the app, acknowledging that it would collect personal information to make the experience more unique and enjoyable, and that the company therefore cannot be held responsible for the amount of time users spend on the app. It can also turn around and blame the parents for not placing time restrictions on their children’s devices, a tool that is available for them to use.

It is possible that we have hit a point in history where capitalism has reached its peak, with companies finding small loopholes within business law and business ethics to make themselves either less liable or not liable at all for misleading consumers about the promises they have made. It will be intriguing to see what happens in the case against Meta and whether it will be found liable for promoting content that causes mental health issues in children and teens; it could be a huge turning point in the history of technology, business law, and how our personal information is collected.

Angelina is a student at Seton Hall University, Class of 2025.

https://www.foxbusiness.com/technology/dozens-states-sue-meta-alleging-social-media-profoundly-altered-mental-social-realties-american-youth