04.10.24

Elon Musk is facing an investigation in Brazil over fake news on X

BY Fast Company 4 MINUTE READ

A crusading Brazilian Supreme Court justice has included Elon Musk as a target in an ongoing investigation over the dissemination of fake news, and has opened a separate investigation into the U.S. business executive for alleged obstruction.

In his decision, Justice Alexandre de Moraes noted that Musk on Saturday began waging a public “disinformation campaign” regarding the top court’s actions, and that Musk continued the following day—most notably with comments that his social media company X would cease to comply with the court’s orders to block certain accounts.

Musk, the CEO of Tesla and SpaceX who took over Twitter in late 2022, accused de Moraes of suppressing free speech and violating Brazil’s constitution, and noted on X that users could seek to bypass any shutdown of the social media platform by using VPNs, or virtual private networks.

Musk will be investigated for alleged intentional criminal instrumentation of X as part of an investigation into a network of people known as digital militias who allegedly spread defamatory fake news and threats against Supreme Court justices, according to the text of the decision. The new investigation will look into whether Musk engaged in obstruction, criminal organization, and incitement.

“The flagrant conduct of obstruction of Brazilian justice, incitement of crime, the public threat of disobedience of court orders, and future lack of cooperation from the platform are facts that disrespect the sovereignty of Brazil,” de Moraes wrote Sunday.

X’s press office did not reply to a request for comment from the Associated Press, and beyond a few brief posts on X, Musk had not commented publicly as of Monday morning.

Brazil’s political right has long characterized de Moraes as overstepping his bounds to clamp down on free speech and engage in political persecution. In the digital militias investigation, lawmakers from former President Jair Bolsonaro’s circle have been imprisoned and his supporters’ homes raided. Bolsonaro himself became a target of the investigation in 2021.

The justice in March 2022 ordered the shutdown of messaging app Telegram nationwide on the grounds that the platform repeatedly ignored requests from Brazilian authorities, including a police request to block profiles and provide information linked to blogger Allan dos Santos, an ally of Bolsonaro’s accused of spreading falsehoods. Dos Santos’ account is one of those blocked on X in Brazil. Less than 48 hours after issuing his order in 2022, de Moraes said Telegram had complied and permitted it to resume operations.

De Moraes’ defenders have said his decisions, although extraordinary, are legally sound and necessary to purge social media of fake news as well as extinguish threats to Brazilian democracy—notoriously underscored by the January 8, 2023, uprising in Brazil’s capital that resembled the January 6, 2021, insurrection in the U.S. Capitol.

“Judicial decisions can be subject to appeal, but never to deliberate noncompliance,” Luís Roberto Barroso, the Supreme Court’s chief justice, said in a statement Monday.

On Saturday, Musk—a self-declared free speech absolutist—said on X that the platform would lift all restrictions on blocked accounts and predicted that the move was likely to dry up revenue in Brazil and force the company to shutter its local office.

“But principles matter more than profit,” he wrote.

Brazil is an important market for social media companies. About 40 million Brazilians, or about 18% of the population, access X at least once per month, according to the market research group Emarketer.

Musk later instructed users in Brazil to download a VPN to retain access if X was shut down and wrote that X would publish all of de Moraes’ demands, claiming they violate Brazilian law.

“These are the most draconian demands of any country on Earth!” he later wrote.

Brazil’s constitution was drafted after the 1964-1985 military dictatorship and contains a long list of aspirational goals and prohibitions against specific crimes, such as racism and, more recently, homophobia. But freedom of speech is not absolute.

Musk had not published de Moraes’ demands as of Monday morning and prominent blocked accounts remained so, indicating X had yet to act based on Musk’s previous pledges.

De Moraes’ decision warned against doing so, saying each blocked account that X eventually reactivates will entail a fine of 100,000 reais ($20,000) per day, and that those responsible will be held legally to account for disobeying a court order.

“Including Elon Musk in the digital militias investigation is one thing. Blocking X is another. With this, Moraes is making a nod, saying that he didn’t remain inert amid provocations from Elon Musk,” Carlos Affonso, director of Rio de Janeiro-based think tank Institute for Technology and Society, said by phone from Washington. “It is a warning shot so that lines aren’t crossed.”

Affonso, a professor of civil rights at the State University of Rio de Janeiro, on Monday was attending a symposium at Georgetown Law School about Brazil’s business climate and legislation, and said the implications of de Moraes’ decision for Musk and X were “the talk of the town.” Affonso also wondered what the brewing spat might mean for Musk’s Starlink satellites that provide internet service to remote Brazilian regions like the Amazon rainforest and Pantanal wetlands.

Bolsonaro—who bestowed Musk with a prestigious medal when he visited Brazil in 2022—was among those encouraging Musk to follow through with his promises to publish documents, saying they would reveal how the top electoral court was pressured to interfere in the 2022 election that he lost. Bolsonaro has often made such claims, without any evidence.

“Our freedom today is largely in his hands,” Bolsonaro said about Musk in a live broadcast on social media Sunday night. “The action he’s taking, what he’s been saying and he hasn’t been intimidated and has said that he’s going to put forward this idea of fighting for freedom for our country. That’s good.”

The lower house lawmaker who is in charge of handling a bill that aims to establish rules for social media platforms said on X that the episode underscored the urgency of bringing the proposal to a vote. It was approved by the Senate in 2020. Brazil’s attorney general on Saturday night had already voiced his support for regulation.

“We cannot live in a society in which billionaires domiciled abroad have control of social networks and put themselves in a position to violate the rule of law, failing to comply with court orders and threatening our authorities. Social peace is nonnegotiable,” Jorge Messias wrote on X.

And President Luiz Inácio Lula da Silva’s minister of institutional relations, Alexandre Padilha, wrote Monday on X that the administration will support the Supreme Court and its probes, and work with Congress and civil society to build a regulatory framework.

FastCompany

via David Biller and Gabriela Sá Pessoa, Associated Press

04.08.24

The meaning of being an African YouTuber: Big audiences, no big money

BY Fast Company 3 MINUTE READ

In April 2018, 31-year-old Nigerian content creator Tayo Aina’s video about rapper J. Cole’s performance in Nigeria went viral, amassing 1.1 million views.

Despite that success, because his audience was largely based in Africa, Aina received just $132 from the platform—a relatively paltry payout, and significantly less than a creator with a predominantly Western audience would have received for the same performance metrics.

“In terms of playing on the global stage, I would say that we are trying our best, but we still have a long way to go,” says Aina, who boasts more than 827,000 subscribers on YouTube and more than 240,000 followers on Instagram. That’s because despite Africa’s growing creator economy, platform payouts favor Western audiences.

Nigeria—Africa’s most populous country—has a growing digital creator economy that contributes more than 1.45% of total gross domestic product and is projected to add 3 million-plus jobs by 2027. Globally, the creator economy’s size is estimated to be more than $250 billion and is expected to nearly double by 2027.

Across Africa, content creation has begun attracting young people in droves—in part due to growing internet and smartphone penetration. As of 2022, more than 384 million Africans consumed content and music on social media. Though only about 51% of Africa’s population currently has internet access, that percentage is expected to grow to 87% by 2030.

But as African content creators have found, geography has a big influence on how much money videos can make. YouTube ties its payouts to what advertisers pay to reach 1,000 viewers. That rate, known as CPM (cost per mille), varies by country and by genre of video. “You may have a lot of views, but the revenue is not equally compensated [in every country],” says John Karanja, whose Kenya-based channel Afrikan Traveller has 111,000 YouTube subscribers.

Aina estimates that “the same video will probably make 10 times more if you had an American audience,” because of the difference in CPM, which is reportedly as high as $10 in the U.S. and as low as $1 in African countries such as Nigeria and Kenya, according to market trends site Gitnux. This means that even as African YouTubers like Aina and Karanja cultivate audiences in their countries and beyond, their earning potential is stifled by the relatively low cost of acquisition for advertisers. The result is a need to shift focus to appeal to international audiences.
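For a rough sense of that math, here is a back-of-envelope sketch in Python. It is illustrative only, not YouTube’s actual payout formula: the CPM figures are the reported market averages above, the 55% creator share is the split commonly cited for long-form ad revenue, and real payouts depend on how many views actually serve ads.

    # Illustrative only: how CPM differences compound into the earnings gap
    # described above. Assumes every view serves an ad, which overstates payouts.
    def estimated_ad_revenue(views: int, cpm_usd: float, creator_share: float = 0.55) -> float:
        return views / 1000 * cpm_usd * creator_share

    views = 1_100_000  # roughly the views on Aina's viral J. Cole video
    print(f"U.S.-heavy audience (~$10 CPM): ${estimated_ad_revenue(views, 10.0):,.0f}")  # $6,050
    print(f"African audience (~$1 CPM): ${estimated_ad_revenue(views, 1.0):,.0f}")       # $605

Even the lower figure overstates the $132 Aina actually received, since only a fraction of views carry ads, but the tenfold gap matches his estimate.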

“You have to diversify your content and just focus on topics that have a much broader appeal instead of niching down to something that might just have people only from Nigeria watch you,” Aina says. He’s focused on broadening his appeal with travel-related content in places like the Caribbean, which resonates with audiences in Africa and the United States. Aina’s video on exploring Los Angeles for 30 days garnered him his biggest audience outside of Africa, raking in more than 800,000 views.

As he works to build an audience in markets coveted by advertisers, Aina has had difficulty traveling internationally—a core part of being a travel vlogger—because his Nigerian passport allows him entry into just 28 countries without a visa. “(Someone) who has an American passport can travel almost anywhere—they don’t need to apply for visas,” he says. “That means they can create more content [than I can].”

Over the past few years, several countries have rejected Aina’s visa applications. Despite building a strong travel history, in 2022 he was denied entry to Dubai on his way to a conference, an experience he says led him to find an alternative. His solution was a pricey one—obtaining a passport in St. Kitts and Nevis via its “Citizenship by Investment” program. The process—documented in a recent video—cost $150,000, but now he can travel to 140 countries without a visa.

Aina acknowledges that he is an exception, noting it took “six-plus years to get to the point where I can afford to buy a passport.” But even African YouTubers who cannot afford a second passport and can’t travel as much still see content creation as a viable economic opportunity—even payouts smaller than those in Western countries can be relatively lucrative.

More than the money, Karanja says African creators want to “write our own stories”—to showcase different countries and communities in a way that until now has mostly been done from a Western perspective, and to celebrate the continent’s growing opportunities. “Yes, there’s poverty. And yes, there’s wildlife,” he says. “But there’s development that is happening. There’s a future. There is hope and we see it.”

FastCompany

04.04.24

This student got into trouble for using an AI tool: Grammarly

BY Fast Company 9 MINUTE READ

Marley Stevens posted a video on TikTok last semester that she described as a public service announcement for college students. Her message: Don’t use grammar-checking software if your professor might run your paper through an AI-detection system.

Stevens is a junior at the University of North Georgia, and she has been unusually public about what she calls a “debacle,” in which she was accused of using AI to write a paper that she says she composed herself except for using standard grammar- and spell-checking features from Grammarly, which she has installed as an extension on her web browser.

That initial warning video she posted has been viewed more than 5.5 million times, and she has since made more than 25 follow-up videos answering comments from followers and documenting her battle with the college over the issue—including sharing pictures of emails sent to her from academic deans and images of her student work to try to prove her case—to raise awareness of what she sees as faulty AI-detection tools that are increasingly sanctioned by colleges and used by professors.

Stevens says that a professor in a criminal justice course she took last year gave her a zero on a paper because he said that the AI-detection system in Turnitin flagged it as robot-written. Stevens insists the work is entirely her own and that she did not use ChatGPT or any other chatbot to compose any part of her paper.

As a result of the zero on the paper, she says, her final grade in the class fell low enough to keep her from qualifying for a HOPE Scholarship, which requires students to maintain a 3.0 GPA. She also says the university placed her on academic probation for violating its policies on academic misconduct, and that she was required to pay $105 to attend a seminar about cheating.

The university declined repeated requests to talk about its policies for using AI detection. Officials instead sent a statement saying that federal student privacy laws prevent them from commenting on any individual cheating incident, and that: “Our faculty communicate specific guidelines regarding the use of AI for various classes, and those guidelines are included in the class syllabi. The inappropriate use of AI is also addressed in our Student Code of Conduct.”

The section of that student code of conduct defines plagiarism as: “Use of another person or agency’s (to include Artificial Intelligence) ideas or expressions without acknowledging the source. Themes, essays, term papers, tests and other similar requirements must be the work of the student submitting them. When direct quotations or paraphrase are used, they must be indicated, and when the ideas of another are incorporated in the paper they must be appropriately acknowledged. All work of a Student needs to be original or cited according to the instructor’s requirements or is otherwise considered plagiarism. Plagiarism includes, but is not limited to, the use, by paraphrase or direct quotation, of the published or unpublished work of another person without full and clear acknowledgement. It also includes the unacknowledged use of materials prepared by another person or agency in the selling of term papers or other academic materials.”

WHAT’S THE DIFFERENCE BETWEEN ACCEPTABLE AI USE AND CHEATING?

The incident raises complex questions about where to draw lines regarding new AI tools. When are they merely helping in acceptable ways, and when does their use mean academic misconduct? After all, many people use grammar and spelling autocorrect features in systems like Google Docs and other programs that suggest a word or phrase as users type. Is that cheating?

And as grammar-checking features grow more robust with generative AI going mainstream, can AI-detection tools possibly tell the difference between acceptable AI use and cheating?

“I’ve had other teachers at this same university recommend that I use [Grammarly] for papers,” Stevens said in another video. “So are they trying to tell us that we can’t use autocorrect or spellcheckers or anything? What do they want us to do, type it into, like, a Notes app and turn it in that way?”

In an interview with EdSurge, the student put it this way:

“My whole thing is that AI detectors are garbage and there’s not much that we as students can do about it,” she says. “And that’s not fair because we do all this work and pay all this money to go to college, and then an AI detector can pretty much screw up your whole college career.”

Along the way, this University of North Georgia student’s story has taken some surprising turns.

For one, the university issued an email to all students about AI not long after Stevens posted her first viral video.

That email reminded students to follow the university’s code of academic conduct, and it also had an unusual warning: “Please be aware that some online tools used to assist students with grammar, punctuation, sentence structure, etc., utilize generative artificial intelligence (AI); which can be flagged by Turnitin. One of the most commonly used generative AI websites being flagged by Turnitin.com is Grammarly. Please use caution when considering these websites.”

INCONSISTENCIES IN AI-DETECTION TOOLS

The professor later told the student that he also checked her paper with another tool, Copyleaks, and it also flagged her paper as bot-written. Stevens says that when she ran her paper through Copyleaks recently, it deemed the work human-written. She sent a screenshot from that process, in which the tool concludes, in green text, “This is human text.”

“If I’m running it through now and getting a different result, that just goes to show that these things aren’t always accurate,” she says of AI detectors.

Officials from Copyleaks did not respond to requests for comment. Stevens declined to share the full text of her paper, explaining that she did not want it to wind up out on the internet where other students could copy it and possibly land her in more trouble with her university. “I’m already on academic probation,” she says.

Stevens says she has heard from students across the country who say they have also been falsely accused of cheating due to AI-detection software.

“A student said she wanted to be a doctor but she got accused, and then none of the schools would take her because of her misconduct charge,” says Stevens.

SUPPORT FROM GRAMMARLY

Stevens says she has been surprised by the amount of support she has received from people who watch her videos. Her followers on social media encouraged her to set up a GoFundMe campaign, which she did to cover the loss of her scholarship and to pay for a lawyer to potentially take legal action against the university. So far she has raised more than $6,100 from more than 90 people.

She was also surprised to be contacted by officials from Grammarly, who gave $4,000 to her GoFundMe and hired her as a student ambassador. As a result, Stevens now plans to make three promotional videos for Grammarly, earning a small fee for each.

“At this point we’re trying to work together to get colleges to rethink their AI policies,” says Stevens.

For Grammarly, it seems clear that the goal is to change the narrative from that first video by Stevens, in which she said, “If you have a paper, essay, discussion post, anything that is getting submitted to TurnItIn, uninstall Grammarly right now.”

Grammarly’s head of education, Jenny Maxwell, says that she hopes to spread the message about how inaccurate AI detectors are.

“A lot of institutions at the faculty level are unaware of how often these AI-detection services are wrong,” she says. “We want to make sure that institutions are aware of just how dangerous having these AI detectors as the single source of truth can be.”

Such flaws have been well documented, and several researchers have said professors shouldn’t use the tools. Even Turnitin has publicly stated that its AI-detection tool is not always reliable.

Annie Chechitelli, Turnitin’s chief product officer, says that its AI detection tools have about a 1% false positive rate according to the company’s tests, and that it is working to get that as low as possible.

“We probably let about 15% [of bot-written text] go by unflagged,” she says. “We would rather turn down our accuracy than increase our false-positive rate.”

Chechitelli stresses that educators should use Turnitin’s detection system as a starting point for a conversation with a student, not as a final ruling on the academic integrity of the student’s work. And she says that has been the company’s advice for its plagiarism-detection system as well. “We very much had to train the teachers that this is not proof that the student cheated,” she says. “We’ve always said the teacher needs to make a decision.”

AI’S CHALLENGING POSITION FOR STUDENTS AND TEACHERS

AI puts educators in a more challenging position for that conversation, though, Chechitelli acknowledges. In cases where Turnitin’s tool detects plagiarism, the system points to source material that the student may have copied. In the case of AI detection, there’s no clear source material to look to, since tools like ChatGPT spit out different answers every time a user enters a prompt, making it much harder to prove that a bot is the source.

The Turnitin official says that in the company’s internal tests, traditional grammar-checking tools do not set off its alarms.

Maxwell, of Grammarly, points out that even if an AI-detection system is right 98% of the time, it still falsely flags about 2% of human-written papers. And since a single university may have 50,000 student papers turned in each year, if all its professors used an AI-detection system, roughly 1,000 papers would be falsely called cases of cheating.
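The arithmetic behind that warning is easy to check. The sketch below simply restates it, assuming, as Maxwell’s example implicitly does, that all 50,000 papers are human-written and every one is run through a detector.

    # Sanity check on the false-positive math above.
    papers_per_year = 50_000
    false_positive_rate = 0.02  # a detector that is "right 98% of the time"
    print(papers_per_year * false_positive_rate)  # 1000.0 papers falsely flagged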

Does Maxwell worry that colleges might discourage the use of her product? After all, the University of North Georgia recently removed Grammarly from a list of recommended resources after Stevens’ TikTok videos went viral, though it later added the tool back.

“We met with the University of North Georgia and they said this has nothing to do with Grammarly,” says Maxwell. “We are delighted by how many more professors and students are leaning the opposite way—saying, ‘This is the new world of work and we need to figure out the appropriate use of these tools.’ You cannot put the toothpaste back in the tube.”

For Tricia Bertram Gallant, director of the Academic Integrity Office at the University of California San Diego and a national expert on cheating, the most important issue in this student’s case is not about the technology. She says the bigger question is about whether colleges have effective systems for handling academic misconduct charges.

“I would be highly doubtful that a student would be accused of cheating just from a grammar and spelling checker,” she says, “but if that’s true, the AI chatbots are not the problem, the policy and process is the problem.”

“If a faculty member can use a tool, accuse a student, and give them a zero and it’s done, that’s a problem,” she says. “That’s not a tool problem.”

She says that conceptually, AI tools aren’t any different than other ways students have cheated for years, such as hiring other students to write their papers for them.

“It’s strange to me when colleges are generating a whole separate policy for AI use,” she says. “All we did in our policy is add the word ‘machine.’” The university’s academic integrity policy now explicitly forbids using a machine to do work that is meant to be done by the student.

She suggests that students should make sure to keep records of how they use any tools that assist them, even if a professor does allow the use of AI on the assignment. “They should make sure they’re keeping their chat history” in ChatGPT, she says, “so a conversation can be had about their process” if any questions are raised later.

A FAST-CHANGING LANDSCAPE

While grammar and spelling checkers have been around for years, many of them are now adding new AI features that complicate things for professors trying to understand whether students did the thinking behind the work they turn in.

For instance, Grammarly now has new options, most of them in a paid version that Stevens didn’t subscribe to, that use generative AI to do things like “help brainstorm topics for an assignment” or to “build a research plan,” as a recent press release from the company put it.

Maxwell, from Grammarly, says the company is trying to roll out those new features carefully, and is trying to build in safeguards to prevent students from just asking the bot to do their work for them. And she says that when schools adopt its tool, they can turn off the generative AI features. “I’m a parent of a 14-year-old,” she says, adding that younger students who are still learning the basics have different needs than older learners.

Chechitelli, of Turnitin, says it’s a problem for students that Grammarly and other productivity tools now integrate ChatGPT and do far more than just fix the syntax of writing. That’s because she says students may not understand the new features and their implications.

“One day they log in and they have new choices and different choices,” she says. “I do think it’s confusing.”

For the Turnitin leader, the most important message for educators today is transparency in what, if any, help AI provides.

“My advice would be to be thoughtful about the tools that you’re using and make sure you could show teachers the evolution of your assignments or be able to answer questions,” she says.

Gallant, the national expert on academic integrity, says that professors do need to be aware of the growing number of generative AI tools that students have access to.

“Grammarly is way beyond grammar and spelling check,” she says. “Grammarly is like any other tool—it can be used ethically or it can be used unethically. It’s how they are used or how their uses are obscured.”

Gallant says that even professors are running into these ethical boundaries in their own writing and publication in academic journals. She says she has heard of professors who use ChatGPT in composing journal articles and then “forget to take out part where AI suggested ideas.”

There’s something seductive about the ease with which these new generative AI tools can spit out well-formatted text, she adds, and that can make people think they are doing work when all they are doing is putting a prompt into a machine.

“There’s this lack of self-regulation—for all humans but particularly for novices and young people—between when it’s assisting me and when it’s doing the work for me,” Gallant says.

This article was syndicated from EdSurge. EdSurge is a nonprofit newsroom that covers education through original journalism and research. Sign up for their newsletters.

Jeffrey R. Young is an editor and reporter at EdSurge and host of the weekly EdSurge Podcast.

04.03.24

Apple will no longer support this iPhone

BY Fast Company 2 MINUTE READ

Time marches quickly for just about anyone. But in the technology world, it can positively gallop.

Apple has declared the iPhone 6 Plus to be “obsolete” technology, meaning it will no longer repair or service the device—and service providers are no longer able to order parts for the products.

The declaration comes roughly seven years after the company stopped offering the device at retail. The iPhone 6 Plus was removed from stores in September 2016. Owners of the regular iPhone 6 have a little more time, since that device was sold at retail for a bit longer.

Both the iPhone 6 and 6 Plus made their debuts in September 2014. While some die-hards might still have the phones, most have likely moved on in that time. The phones stopped receiving major iOS updates in 2019, when iOS 13 dropped support for them.

As the iPhone 6 moves into the world of obsolescence, a few other Apple products have taken a step closer to that dreaded label.

Apple has added the iPad Mini 4 to its “vintage” list. That means it has been at least five years since the company last sold the device. (Vintage is the final classification before becoming “obsolete” in Apple’s vernacular.)

Vintage is, in essence, your warning period to either get a new device or start saving for one. (The iPhone 6, for instance, has been “vintage” since September 2022.) You can no longer expect to receive software upgrades when it reaches this point, and Apple will not guarantee its ability to repair the device. (Third-party repair facilities can still get parts, generally, but that too is not a guarantee.)

If there’s a major security flaw discovered with a vintage device, Apple may send an update, but if you’re using an obsolete one, you’re on your own—no exceptions.
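Apple’s tiers, as described above, boil down to how long a product has been off sale. Here is a minimal sketch of that logic, using the thresholds in this article (at least five years for “vintage,” roughly seven for “obsolete”); Apple’s official rules may differ in the details.

    # Illustrative sketch of Apple's lifecycle tiers as described in this article.
    # Thresholds are approximate; Apple defines the official cutoffs.
    from datetime import date

    def apple_support_status(last_sold: date, today: date) -> str:
        years_off_sale = (today - last_sold).days / 365.25
        if years_off_sale >= 7:
            return "obsolete"  # no repairs, no parts, no updates
        if years_off_sale >= 5:
            return "vintage"   # repairs and parts no longer guaranteed
        return "supported"

    # The iPhone 6 Plus left Apple's stores in September 2016.
    print(apple_support_status(date(2016, 9, 1), date(2024, 4, 3)))  # -> "obsolete"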

The iPad Mini 4 wasn’t the only new addition to the “vintage” list. The PRODUCT(RED) iPhone 8 and iPhone 8 Plus have also been added. Other models of that phone, however, are not considered vintage yet.

Electronic waste (e-waste) is an increasingly large problem and has been called one of the fastest-growing climate change challenges. Over 2 billion PCs, tablets, and mobile phones ship each year—and many people don’t know what to do with the products when they get new ones.

A United Nations report released last month found that generation of e-waste around the world is rising five times faster than documented e-waste recycling. In 2022, 62 million tons of e-waste was generated. That’s enough to fill 1.55 million 40-ton trucks, which would be roughly enough to form a bumper-to-bumper ring encircling the equator.
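The truck figure is a straight unit conversion, easy to verify:

    # 62 million tons of e-waste, packed into 40-ton trucks
    print(62_000_000 / 40)  # 1550000.0 -> 1.55 million trucks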

Apple, years ago, came under fire for contributing to this by allegedly slowing down older devices on purpose to encourage people to buy new ones. The company faced a class-action suit over the practice and settled for $500 million. (Checks began going out earlier this year to customers who submitted claims.)

Apple has since taken steps to reduce its carbon footprint, announcing plans to eliminate all plastic from its packaging by 2025 and having long ago switched to 100% green energy. The company also says it uses as much recycled material as possible when building new iPhones and offers free recycling of its products when customers bring them to stores.

The company, last year, also began supporting “right to repair” laws in California and other states.

FastCompany

04.02.24

You are invited to co-design this smartphone

BY Fast Company 2 MINUTE READ

London-based tech startup Nothing wants you to help design its next phone.

The company is inviting fans to take control of the next model of its phone by crowdsourcing aspects of its design. Nothing has charmed the tech press with its Android phone models that feature the company’s signature transparent hardware and programmable flashing lights (nicknamed “Glyphs”).

With its upcoming launch, the Phone (2a), the company wants to see where fans will take the design. Through the Community Edition Project, fans will have unprecedented access to the company’s behind-the-scenes design process.

“From opening community investment rounds to inviting community-elected representatives to our board meetings, we started our brand with our community and have continued to grow with it in lockstep,” a Nothing spokesperson wrote in an email to Fast Company. “With the Community Edition Project, we’re excited to cocreate with our talented community members and build something great together.”

The project will roll out in four phases: hardware, wallpaper, packaging, and marketing. Followers are encouraged to submit their own original designs for each step, and winners will be invited to work directly with the Nothing team to execute their visions. While submissions for the hardware stage can’t make adjustments to the physical geometry of the Phone (2a), entrants are free to change any other elements of the phone’s back panel.

“Consider all of the visible components on Phone (2a): the NFC coil, body, buttons, rear cover, and let your imagination take you somewhere else,” the company’s website reads. “By experimenting specifically with the color, materials, and finishes of those components, what else could the back of the phone look like?”

Submissions will be accepted in any form (video, drawing, rendering, etc.) as long as they meet Nothing’s overall guidelines. Community members will then help the Nothing design team decide on the winning entry. The first stage is open now and closes April 16; the whole process is slated to take about six months.

For those drafting their applications, Nothing suggests starting with a couple of basic questions: “Is there a story to tell? Is there an unexpressed aspect of the Nothing DNA we have yet to reveal?” It will be up to users to decide.

FastCompany

03.21.24

Venture capitalism’s impact on work: a university professor’s view

BY Fast Company 6 MINUTE READ

Venture capitalism is a manifestation of structural changes that increasingly shifted power into the hands of the financial sector, which began to cement its influence over the economy following the crisis of the 1970s. Amid increased competition, rampant inflation, and rising energy costs, American corporations’ profit margins began to stagnate. Powerful actors responded to this threat by mobilizing for changes in corporate governance and public policy to reinvigorate profits—a social movement of the elite aimed at reinventing the corporation.

The owners of large firms—their shareholders—increasingly held executives accountable for the slowed growth in profits. Investors advocated for the “shareholder value” conception of the firm, according to which the sole purpose of publicly held US corporations is to maximize the price of a company’s shares on the stock market, thereby increasing the returns to owners. Shareholders organized to increase pressure on executives, incentivizing them to make decisions that would be perceived as prioritizing investors’ interests.

The so-called “shareholder revolution” changed the nature of the game that corporations were playing. Previously, executives had been focused on increasing sales to maintain their companies’ growth and stability, reinvesting gains in developing products and workers. At the same time, they attended to their responsibilities to an array of stakeholders, including their customers, employees, and the communities in which they operated.

General Electric’s 1953 shareholder report touted how the company worked “in the balanced best interests of all,” describing how much the company paid in salaries, benefits, and taxes before mentioning that it had returned a modest 3.9 percent of sales to investors. Today, executives must commit to pleasing shareholders who view the corporation not as a social institution but as a bundle of assets. It has become less important for companies to focus on balancing their books and more important that they increase the firm’s market value every quarter, regardless of the instability that may result from their actions. Companies are designed to redistribute resources upward and risk downward. Managers are duty-bound to maximize the returns delivered to investors; considerations of the social value they create or the harms they inflict on workers and societies are secondary.

As the criteria for being considered a successful company changed, firms altered how they operated. Corporate reorganizations became more common, and companies adopted more cost-cutting technologies and employment practices such as layoffs, outsourcing, and scaling back compensation and fringe benefits. At the same time, ostensibly nonfinancial firms, like General Electric and General Motors, increasingly pursued financial activities like mortgage lending as a means of generating profits.

As companies found new ways to trim costs and boost revenue, executives began to siphon off a far greater share of corporate profits to investors. In the 1970s, publicly traded US companies paid their shareholders about one-third of their earnings via dividend payments. A 1982 rule change at the Securities and Exchange Commission then allowed corporations to buy back shares of their own stock, rewarding investors with inflated share prices by reducing the supply of company stock on the market.

Since then, stock buybacks have come to consume most of the earnings of S&P 500 companies. By the late 1980s, publicly traded corporations were distributing more than 100 percent of their profits to shareholders via dividends and stock buybacks, either by drawing down savings or selling off assets to pay investors more than the companies had earned. This has left corporations with less money to invest in opening new plants and stores or training and compensating workers. During the 2010s, publicly traded corporations spent over $3.8 trillion on their own stocks—more than every other type of investor (e.g., mutual funds, pension funds, foreign investors, and individuals) combined.

These developments were indicators of the trend toward financialization—what economist Gerald A. Epstein has described as “the increasing role of financial motives, financial markets, financial instruments, financial actors, and financial institutions in the operations of domestic and international economies.” Deregulation of the banking sector during the 1980s, the concentration of the financial industry, and the introduction of innovative financial products further contributed to the dominance of financial actors and activities in the US economy.

The rise of venture capital funds as a mainstream investment option for wealthy individuals and institutions is among the most visible manifestations of this trend. VC investment decisions are premised upon the belief that, at some point in the future—through the sale of shares to another investor during a subsequent round of VC funding, corporate acquisition, or initial public stock offering—another party may be willing to pay a substantially higher price for a comparable ownership stake in the firm. In this sense, venture capital is no different from financial activities in other segments of the finance industry. In the words of one investment banker, “at the end of the day, with any investment product, you might say, you’re looking for somebody else to pay you more for it.”

Yet there are also aspects of venture capitalism that are not adequately accounted for by theories that specify how finance capital affects organizational structures and practices. Capitalism is a system characterized by “dynamic disequilibrium,” so it’s no surprise that even after tech startups grow into publicly traded corporations, they continue to innovate as they compete for attention and dollars. But venture-backed startups represent a supercharged version of financialization that takes its core logics to extremes.

VC investments are far more speculative than investments in publicly traded firms, and VCs invest with the knowledge that most of the firms they fund will not survive. Startups maximize flexibility not to wring more efficiency out of existing operations, but instead to facilitate constant experimentation aimed at rapid and precipitous growth. Startup workers build companies on quicksand; if organizations are to survive while developing untested products in fast-changing environments, everything must be subject to change.

The consequences of the rise of finance have been far-reaching, particularly for workers. In an increasingly financialized economy, workplaces and work are increasingly structured to serve the interests of investors, often at the expense of employees. Before the shareholder revolution, firms typically hired additional workers to cover new roles and responsibilities associated with growth. Now, however, companies (and their investors) prioritize organizational flexibility. In practical terms, this means that the corporate workforce has become increasingly bifurcated.

Organizations typically invest in a smaller “core” of well-compensated employees and maintain arm’s-length relationships with a greater share of (often outsourced, subcontracted, or part-time) “peripheral” workers, many of whom possess less specialized skills, receive lower wages, and enjoy fewer of the legal protections associated with full-time employment. Workers with previously secure jobs have found themselves exposed to more insecurity in the labor market. The availability of middle-class union jobs for people holding only a high school degree has plummeted. Median employment tenure has shrunk, as has the percentage of employees receiving fringe benefits like medical coverage and defined benefit retirement plans. Meanwhile, protections like unemployment and health insurance remain tied to full-time employment, failing to reflect the rise of nonstandard employment arrangements.

These changes in the relationship between workers and employers have contributed to a staggering rise in income inequality. Between 1980 and 2014, the top 1 percent of earners in the United States saw their share of the national income double, from about 10 to 20 percent. Workers in the financial sector have been among those driving this phenomenon: the increasing profitability of financial institutions has allowed its workforce to claim a wage premium of 50 percent over workers in other industries. Yet, for workers in the bottom three-quarters of the income distribution, wages have been stagnant. The increasing importance of financial activities in corporations has decreased the relative value of workers involved in productive activities; along with the declining power of organized labor, this has left workers with less leverage to advocate for their own interests within firms. In short, workers’ cut of the national income has decreased even as productivity has risen, signaling a redistribution of income from workers to managers, executives, and investors.

In venture-backed startups, where stock options are commonly included in privileged employees’ compensation packages, some workers may find themselves in a unique position, inhabiting the role of employee while simultaneously sharing investors’ dreams of a massive payout. Yet, unlike investors, startup workers—who may log long hours in precarious jobs while in some cases even being asked to forego their salaries—find that their fortunes are tied to particular companies or industries, leaving them with fewer opportunities to diversify their risk portfolios. Founders value organizational flexibility both because they genuinely do not know what the future will hold for their startups, and because they know that investors would be wary of companies that make long-term commitments to specific people and processes.

This dynamic makes startup cultures ideal sites in which to observe how the financialization of the economy is transforming workplaces and workers’ subjectivities. Sociologists have long endeavored to situate labor relations within their social contexts to understand the cultural dimensions of work. Managers and workers participate in organizational cultures that endow tasks with meanings and values, which in turn matter for how workers are motivated, how tasks are executed, and how workplace technologies are deployed. Venture capitalism invites workers to dwell in fantasies of how being a part of a startup could transform their lives.

Behind the Startup thus attends not only to fluidity in the organization of production at AllDone, but also to how startup workers’ livelihoods and emotions are linked to the imaginaries invoked by venture capital and the organizational flux that it instigates.

Benjamin Shestakofsky is an assistant professor of sociology at the University of Pennsylvania, where he is affiliated with AI at Wharton and the Center on Digital Culture and Society. He is author of Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality.

FastCompany

03.19.24

Forget Facebook and X for news: LinkedIn is closing the news gap

BY Fast Company 6 MINUTE READ

Large online platforms have largely given up on the news business. Meta finally removed its dedicated tools for news publishers. Google is experimenting with removing the news tab from search results. AI chatbots are eating the last remaining ways that publishers can drive traffic to their sites. And Elon Musk, the owner of X, the site formerly known as Twitter, spends most of his days railing against the mainstream media.

All of this has led to some pretty serious soul-searching among America’s journalists. Is the future email newsletters? Will podcasts save the news? Does everything need to be short vertical video now? Well, here’s a question that it might be time to start asking: What about LinkedIn?

Let’s first get the obvious out of the way: LinkedIn has never been a particularly sexy online platform. Yes, it has a huge number of users—the company currently boasts about a billion across more than 200 countries. But it’s less clear how many of them are actively using it on a daily basis to read and share content. A spokesperson for LinkedIn tells me that over 100 million members interact with content in their feeds every week.

The public content users create and engage with on its main feed also tends to be somewhat different from what you might see opening up, say, X or Threads. A LinkedIn account is tied to your work history and, presumably, your real identity, which means LinkedIn posts tend to oscillate between bland and deeply unhinged. In 2017, the latter, a capitalist stream-of-consciousness style of posting popular with the site’s business-centric super-posters, was nicknamed “broetry.” That culture is not nearly as prominent on the platform as it used to be—much of it spread to X during the 2020 crypto bull market (back when it was still known as Twitter)—but there’s still a general HR-friendly, work-safe vibe to the whole place.

But people are getting their news on LinkedIn.

According to a Pew survey released last November, a little under a quarter of LinkedIn users say they get their news on the site. According to that same survey, LinkedIn news consumers are fairly evenly split between men and women, are overwhelmingly liberal, and almost 70% of them are under 49. So even though the platform may feel like an artifact from a different era of the web, where social networks functioned primarily as directories of personal contacts, that does appear to be changing.

As for what they’re reading and who they’re following, it’s a little harder to figure out. If you try to look up who the top influencers are on LinkedIn, you’ll find the same lists of well-known business personalities—Bill Gates, Richard Branson, Gary Vaynerchuk. And while they might be sharing content and have millions of followers, it’s not exactly journalism. Vaynerchuk, in particular, is a super-poster, but all he really talks about is himself. (Though his new wine-tasting show is pretty fun.)

If you want a good example of the kind of thing that goes viral on LinkedIn at any given moment, consider a video a product manager in Madrid posted, which has blown up on the site. It is, essentially, a video résumé. The comments underneath are impossibly positive, which, according to creators using LinkedIn I’ve spoken to, is largely true for everything shared to the site. (Though it is still a social network, and people will argue with each other.)

But being an online platform that publishers might be able to actually rely on goes beyond influencer link-sharing. And it’s in this area that LinkedIn does actually appear to be committed. At least more than other platforms.

The site has provided reporters with what’s honestly an incredibly powerful journalism tool for over a decade. In the last few years it has also launched a podcast network, a native newsletter product, and a premium subscription tool. LinkedIn’s spokesperson says the company is working directly with over 400 publishers, and those publishers have gained a combined 240 million followers. And this kind of support isn’t actually new.

Of course, many platforms have some version of these features now. So are they enough to actually turn LinkedIn users into a real audience?

Journalist Alex Kantrowitz thinks so. In many ways, Kantrowitz is the perfect candidate for appraising whether or not LinkedIn is a suitable home for online journalism right now. He’s the current digital media walkabout personified. He was one of the reporters to coin the term “LinkedIn broetry” back in 2017. Since then he’s started his own Substack publication, called Big Technology, and two years ago, began working with LinkedIn on his Big Technology podcast.

“The podcast has tripled in size in two years,” he tells me.

Kantrowitz says one of the biggest surprises is how much friendlier LinkedIn users are compared to other platforms. “They realize that everything they write there is going to be seen by anyone who they work with, or has the potential to hire them. So the comments tend to be more constructive than other social networks,” he says.

His podcast isn’t distributed inside of LinkedIn, though. It goes out via Megaphone and is supported by LinkedIn’s ad network. But by working with LinkedIn, he’s also grown his presence on the platform. “This is directionally accurate,” he says. “I think I’ve gone from around 4,000-5,000 followers on LinkedIn when I started working with the podcast network to 20,000 today.”

And it’s this kind of growth that is beginning to make LinkedIn feel like a viable replacement for the journalism world’s favorite social network, Twitter, or the traffic powerhouse that Facebook used to be. Though, with some pretty massive caveats.

LinkedIn is a professional network, by definition. And even though the company has rolled out entertainment features, like vertical videos, that isn’t changing. Also, while its users are sharing articles from large publishers, the articles that are performing best on the site tend to be almost exclusively about business.

In December, LinkedIn’s Ads Blog shared a list of the most engaging articles on LinkedIn in 2023. All of them had some connection to corporate America. The two most engaged-with articles, with over 100,000 engagements each, were a Washington Post story titled “A four-day workweek pilot was so successful most firms say they won’t go back” and a Vogue Business story titled “The future of influencer marketing is offline and hyper-niche.”

Kyley Schultz, the assignment editor for The Washington Post‘s social team, says her team has in recent months started to take LinkedIn more seriously as a traffic source. The paper launched a newsletter on the platform called Post Grad, which has a quarter of a million readers. (Fast Company debuted its AI Decoded newsletter on LinkedIn last year, and in just 10 months it’s already amassed over 210,000 followers.)

As Schultz sees it, the point of finding a new home for news online isn’t about finding a feed you can dump your stories into and expect people to mindlessly click. And if publishers think that LinkedIn is the place that strategy will finally work, they are very mistaken.

“People are going to be turned off by that and go somewhere else,” she says.

She also says that as she’s begun using the social network more, she’s started to wonder whether it’s actually more versatile than its reputation leads people to believe. The notion that LinkedIn users only want to read business content, she says, is a bit of a self-fulfilling prophecy.

“What is the success rate of someone completely pivoting and trying something else?” she says of posting more diverse types of stories on LinkedIn. “Like, is it actually going to fail? Or are there just not enough people trying it?”

And as more publishers begin using the site more consistently, that could change. In fact, social media analyst Matt Navarra tells me it’s not impossible to imagine LinkedIn evolving into a more mainstream feeling social network as it becomes a destination for news content.

“It’s very much more like a traditional social network where people are sharing news and memes and funny stuff,” he says.

He says his personal LinkedIn usage goes in phases, but nowadays it’s not uncommon to see pretty much the same content you see on sites like X and Threads, just with slightly more polite replies underneath. And like Kantrowitz, he thinks the lack of toxicity is why news is doing better there.

“It doesn’t have quite as much of the shit,” he says. “The way that people engage is less controversial and troubling. And therefore it’s easier for [LinkedIn] to stick with news and not have all the problems of misinformation because they don’t seem to have that behavior.”

But the lack of toxicity might not be as real as its creators think it is.

Last year, LinkedIn added a “rewrite with AI” tool that has been criticized for opening the floodgates on AI spam. AI-generated profile pictures have been an issue on the site for years, as have fake commenters. And the real test for LinkedIn’s super-positive community was Vivek Ramaswamy’s short-lived presidential campaign, which was, in part, driven by his LinkedIn posts. Ramaswamy’s account was briefly locked after the site determined his posts contained “misleading or inaccurate information.” It’s unlocked now, but he hasn’t posted in six months.

But finding a home for news publishers in 2024 isn’t about finding a perfect fit, but rather finding one that’s close enough. The traffic firehose days of the 2010s aren’t coming back. And LinkedIn is not the secret to infinite pageviews. But it might be fertile ground to build an audience with manageable issues.

For all its retro, business-casual vibe, it’s more in line with the way we tend to use the internet now. Users aren’t looking for a one-stop shop, a central feed to consume all of their content. They’re using specific platforms to express specific parts of themselves. And though internet engagement is always a toss-up, there is one constant we can always count on: People at work are desperate for something to do other than work, and the news can serve as a nice distraction.

FastCompany

03.14.24

Apple Vision Pro is already being used by medical staff. Here’s how

BY Fast Company 3 MINUTE READ

Two British surgeons say that they used Apple’s new $3,500 headset to carry out Britain’s first virtual-reality operation. The team at London’s Cromwell Hospital, led by orthopedic surgeons Fady Sedra and Syed Aftab, used the Vision Pro to repair a patient’s spine. Neither surgeon donned the ski goggle-esque device themselves, but instead entrusted it to a nurse working alongside them, the Daily Mail has reported.

Nurse Suvi Verho tells the paper that the headset helped her during pre-op, as well as with keeping track of where they were in the procedure and choosing the right surgical tools. Apple’s technology promises to be a “game changer,” she concluded, adding: “It eliminates human error. It eliminates the guesswork. It gives you confidence in surgery.”

Online reviews and clips of tech enthusiasts sporting the Vision Pro in the wild have been filtering out for over a month. Last month, prominent Florida neurosurgeon Robert Masson and eXeX, a self-proclaimed leader in “mixed-reality enhanced surgical performance,” released photos of Masson actually wearing the headset during a spine surgery. In a press release, Masson announced that the one-and-a-half-pound wearable—which requires using eye-gazing as a mouse pointer and utilizing various air pinches, finger taps, hand drags, and wrist flicks—felt “invisible to me,” and in fact, left him aware of only “the extreme calm, quiet, and surreal effortlessness of the predictable, undistracted workflow of my team.”

Dr. Aftab, the London surgeon, meanwhile argued that the Vision Pro has the potential to turn a nurse he’s not worked with before into a 10-year OR veteran, transforming his entire team into basically a surgical Formula One pit crew: “It doesn’t matter if you’ve never been in a pitstop in your life. You just put the headset on.”

These surgeons already seem certain of the Vision Pro’s capabilities in the OR. But patients might still have a few questions before they feel comfortable signing up for a VR headset-assisted surgery.

Like, what if the surgeon suits up in a Vision Pro for a complicated spinal surgery, then encounters one of the software glitches people are reporting—or what Mark Gurman, Bloomberg News’s chief correspondent for all things Apple, has called “the buggiest first-gen Apple product I’ve used”?

One such glitch is the blurry “pass-through” problem that is said to impact the wearer’s real-world awareness. Another is when the right speaker pod, specifically, overheats to the point of being “uncomfortably warm.” Yet another: hand- and eye-tracking becoming, in Verge editor-in-chief Nilay Patel’s words, “inconsistent and frustrating.” And another: Headaches so bad after 10 minutes of use that it’s caused some tech journalists to return their pairs.

Or, perhaps worse, what about the negative effects on the wearer’s efficiency? What if it’s the equivalent of that New York subway rider who apparently needed almost half a minute to type himself a Note reading “Reminder for tomorrow” while the outside world watched him cycle through all of those mid-air motions?

Or what if it results in the OR equivalent of what happened to Jake Paul’s Ferrari two weeks ago when the YouTuber’s pal backed a golf cart into his $700,000 car while wearing a Vision Pro, leading Paul to ask: “Were you wearing these things? I can’t, I can’t, I can’t. Bro, I hate society.”

Of course, patients surely see some technological promise in Apple’s latest gadget. After all, the first-generation iPods, iPhones, and Apple Watches were released with their fair share of hang-ups, too, if not necessarily ones tied directly to operating on the human spine.

Apple is eager to see the Vision Pro in action during surgery, regardless. It recently put out an official press release teasing ways that the device already “unlocks new opportunities” for precisely these kinds of medical procedures. For now, Apple isn’t yet suggesting that surgeons wear the contraption while operating. But it does brag about how the Vision Pro “seamlessly blends digital content with the physical world, unlocking powerful spatial experiences in an infinite canvas,” and notes, “we can’t wait to see what’s to come.”

ABOUT THE AUTHOR

Clint Rainey is a writer based in New York who has covered the anti-ESG movement and how progressive companies like Starbucks may have lost their way. His articles have appeared in New York Magazine, the New York Times, Newsweek, and the Dallas Morning News, among others.

FastCompany

03.13.24

Lessons from the Kate Middleton scandal

BY Fast Company 3 MINUTE READ

It’s the scandal that keeps on giving, and it has dominated social media discourse in a way few other stories have in years: What the hell is going on with Kate Middleton, Princess of Wales and wife to the king-in-waiting?

Princess Catherine, to use her official royal name, has scarcely been seen in public since the new year. While Kensington Palace released a statement earlier this year saying she had undergone abdominal surgery and would be recovering (i.e., out of sight) until after Easter, her absence from the public sphere has ignited social media speculation. And into that information vacuum, conspiracy theories have crept.

In an attempt to quell the gossip, the palace released a photograph on March 10 of the princess with her three children: a none-too-subtle sign of life designed to tamp down the most egregious commentary. It backfired spectacularly as sleuths pored over the image and found visual inconsistencies; some suggested the princess wasn’t even in the photo. At the same time, press agencies around the world began withdrawing the photo from circulation because it had been doctored. A public statement from the princess admitting she had edited the photo did little to calm the storm, and a follow-up photo released on March 11, designed to show the royal couple together, was criticized for its poor quality and awkward framing (the woman in the picture is turned away from the camera, her face obscured).

Now, the reality is most likely the one the palace has put forward: The princess underwent a serious operation and has been recuperating. But the fact that the controversy could rage for so long is proof that, thanks to technology, speculation now spreads more virulently than ever.

Gemma Milne, a sociologist of technology at the University of Glasgow, says that the Kate Middleton controversy is “a combination of discourses all coming together.” To start with, Milne says that the incident brings “debates around trust in digital media due to generative-AI advancements, leaving us with challenging verification tasks [and] debates about what counts as a ‘real’ image in a time of more explicit-image creation versus the long history of image manipulation, staging, and editing” crashing together in a single moment.

That would be complicated enough in itself. But added to that are “debates about what those whose power is fueled by the public owe said public—think Taylor Swift fandom and the sense of being owed explanations, appearances, access, etcetera; and debates about the role of the Royal Family in a time of change,” she explains. Milne points to the U.K.’s cost-of-living crisis, and the comparative unpopularity of King Charles as the head of the royal household compared to his mother, the late Queen Elizabeth, as a trying time for the monarchy.

Other elements are also at play, most notably the shadow of the generative-AI revolution under which we’ve all lived for at least the past 18 months. Twice the palace has produced photographic evidence that the princess is happy and well, and twice it’s been dismissed as not real. That’s in part because generative-AI tools can now create lifelike images from a simple text prompt, says synthetic media expert Henry Ajder. “Most people are aware that celebrity photos are heavily edited and airbrushed, and this certainly isn’t the first time we’ve seen badly edited examples cause controversy,” he says, pointing to examples such as Time’s airbrushing of O.J. Simpson’s skin tone on a 1994 cover and Natalie Portman’s Dior mascara ad that exaggerated the product’s effects.

But suspicion about what is and isn’t real has been heightened by the generative-AI revolution, which has put the tools to create fakery in the hands of the general public without much effort required. “Hyperrealistic AI-generated content has made some people much more sensitive to what is real and what is AI-generated,” says Ajder. However, while people have only recently become conscious of the power of AI, the reality is that it has been present in their tech lives for a long time. “AI features are everywhere, including the computational photography baked into every image taken on modern smartphones,” he explains.

The case of the Photoshopped princess highlights a number of issues, but above all it shows that we have entered a new era, one in which we need to be more suspicious of what we see. It used to be the case that seeing was believing. Not anymore! “This case may have made the headlines, but in trying to answer the question [of what is real and what is AI], it really puts a mirror up to how synthetic our media landscape already is,” says Ajder.

FastCompany

03.08.24

The future of marketing material in the gen AI world, according to IBM

BY Fast Company 2 MINUTE READ

Three out of four CMOs say their companies will be using generative AI for content creation by 2025, according to a recent study from IBM’s Institute for Business Value (IBV).

Now, after a year of experimenting and working in beta, IBM itself is publicly releasing its case study for using Adobe’s Firefly generative AI platform in its marketing and advertising content.

For its 2023 “Let’s Create” campaign, IBM put Firefly directly into its workflow, using simple text prompts to generate 200 unique advertising assets and over 1,000 marketing variations, a process that took moments rather than months. More impressively, the campaign performed well above IBM’s benchmark, driving 26 times higher engagement and reaching highly valued audiences (20% of campaign respondents identified as C-level decision-makers).
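
The pattern IBM describes, one base concept fanned out into many theme and size variations via prompt templating, is easy to picture in code. Below is a minimal Python sketch of that fan-out; the endpoint URL, request fields, credential, and response format are hypothetical placeholders for illustration, not Adobe’s documented Firefly API.

```python
# Minimal sketch of prompt-templated asset fan-out.
# NOTE: the endpoint, auth scheme, request fields, and response format
# below are hypothetical placeholders, not Adobe's documented Firefly API.
import itertools
import requests

API_URL = "https://example.com/v1/images/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                            # hypothetical credential

BASE_PROMPT = "abstract geometric illustration, bold corporate palette"
SIZES = [(1200, 628), (1080, 1080), (300, 250)]       # common ad formats
THEMES = ["cloud", "security", "automation", "data"]  # campaign angles


def generate(prompt: str, width: int, height: int) -> bytes:
    """Request one rendered asset for a prompt at a given ad size."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "width": width, "height": height},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # assume the service returns raw image bytes


# One base concept fans out into theme x size variations -- the same
# combinatorial production work the article says took moments, not months.
for theme, (w, h) in itertools.product(THEMES, SIZES):
    image = generate(f"{BASE_PROMPT}, {theme} motif", w, h)
    with open(f"asset_{theme}_{w}x{h}.png", "wb") as f:
        f.write(image)
```

Four themes across three sizes yields only 12 files here, but scaling the same loop toward hundreds of assets and a thousand-plus variations is just a longer list of prompts and formats.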

“It was a very high performing campaign, and a great use case of the technology because it played to the positive characteristics of Gen AI, and what it’s uniquely capable of doing, which would take a designer a lot of time manually to composite,” says Billy Seabrook, IBM Consulting’s global chief design officer.

Seabrook says that when embedded in other Adobe tools like Photoshop and Illustrator, Firefly quickly and dramatically accelerated productivity, speeding up early creative processes like sketching, prototyping ideas, and storyboarding concepts, and expanding how IBM designers and creatives brainstormed.

“We saw immediately, from an internal workflow standpoint, how it unleashed the volume of creative ideas that can be rendered quickly, and the acceleration of tedious production tasks like retouching and resizing,” says Seabrook. “And then, it actually expanded the design and ability to create visual work to a much broader audience, equipping a copywriter, for example, to play more in the visual creative process.”

Perhaps the most significant difference between Firefly and other generative-AI tools is that it draws only on Adobe’s stock-image library and appropriately licensed open-source material. That’s a limitation compared to tools like ChatGPT and Midjourney, but one that is much more legally sound when it comes to copyright.

“Having the content credentials, the identification, checks a big box for us in terms of legal comfort and using the tool,” says Seabrook. “Our next bit of concern is just the ethics and the bias around the content it generates, and that’s an evolving story. But we think that’s critical for brands to have an appreciation for before they go and launch something, that there needs to be a really good governance model in place to check for ethics and bias.”

In that same IBM IBV report, more than 42% of CMOs said scaling hyper-personalization is a marketing priority, and 64% said they expect to use generative AI for content personalization in the next year or two.

Seabrook sees that potential with Firefly, both in how it integrates with and enhances his team’s current way of working and in how the type and quality of the content it produces are evolving. “We’re creating the building blocks to make that a reality once we feel comfortable with the brand safety around the content,” he says. “I would argue within the year, you’re going to see a lot more campaigns with quality content going out that have been sort of curated properly.”

FastCompany