Rather than flip on the TV when major newsworthy events happen, like Hamas’ attack on Israel on Oct. 7 and the subsequent retaliation by Israeli forces in Gaza, we open up social media to get up-to-the-minute information. However, while television is still bound to regulations that require a modicum of truthful content, social media is a battleground of facts, lies, and deception, where governments, journalists, law enforcement, and activists are on an uneven playing field.
It is a massive understatement to use the term “fog of war” to describe what is happening in discussions of Hamas and Israel on social media. It’s a torrent of true horror, violent pronouncements, sadness, and disinformation. Some have capitalized on this moment to inflame tensions or gain clout by posting video game clips or older images of war recontextualized. Many governments, including the U.S., were shocked that Israeli intelligence failed to foresee the land, sea, and air attack. Israel is known for its controversial cyber defenses and spyware used to tap into journalists’ and adversaries’ networks. How could this have happened?
It may come as a surprise to some that we are involved in an information war playing out across all social media platforms every day. But it’s one thing to see disinformation, and it’s another to be an active (or unwitting) participant in battle.
Different from individuals, states conduct warfare operations using the DIME model—diplomacy, information, military, and economics. Most states do everything they can to inflict pain and confusion on their enemies before deploying the military. In fact, attacks on vectors of information are a well-worn tactic of war and are usually the first target when the charge begins. It’s common for telecom data and communications networks to be routinely monitored by governments, which is why the open data policies of the web are so concerning to many advocates of privacy and human rights.
With the worldwide adoption of social media, more governments are getting involved in low-grade information warfare through the use of cyber troops. According to a study by the Oxford Internet Institute in 2020, cyber troops are “government or political party actors tasked with manipulating public opinion online.” The Oxford research group was able to identify 81 countries with active cyber troop operations utilizing many different strategies to spread false information, including spending millions on online advertising. Importantly, this situation is vastly different from utilizing hacking or other forms of cyber warfare to directly attack opponents or infrastructure. Cyber troops typically utilize social media and the internet as it is designed, while employing social engineering techniques like impersonation, bots, and growth hacking.
Data on cyber troops is still limited because researchers rely heavily on takedown reports by social media companies. But the Oxford researchers were able to identify that, in 2020, Palestine was a target of information operations from Iran on Facebook and Israel was a target of Iran on Twitter, which indicates that disinformation campaigns know no borders. Researchers also noted that Israel developed high-capacity cyber troop operations internally, using tactics like botnets and human accounts to spread pro-government and anti-opposition messaging and to suppress dissenting narratives. The content Israeli cyber troops produced or engaged with included disinformation campaigns, trolling, amplification of favored narratives, and data-driven strategies to manipulate public opinion on social media.
Of course, there is no match for the cyber troops deployed by the U.S. government and ancillary corporations hired to smear political opponents, foreign governments, and anyone that gets in the way. Even companies like Facebook have employed PR firms to use social media to trash the reputation of competing companies. It’s open warfare—and you’ve likely participated.
As for who runs influence operations online, researchers found evidence of a blurry boundary between government operatives and private firms contracted to conduct media manipulation campaigns online. This situation suggests that contemporary cyber operations are best characterized as fourth-generation warfare, which blurs the lines between civilians and combatants.
It has also called into question the validity of the checks that platforms have built to separate fact from fiction. For instance, a graphic video of the war posted by Donald Trump Jr.—footage he claimed came from a “source within Israel”—was flagged as fake through X’s Community Notes fact-checking feature. The problem, though, was that the video was real. This would not be the first time we have seen fact-checkers spread disinformation; pro-Russian accounts did something similar in 2022.
Time and time again, we have seen social media used to shape public opinion, defame opponents, and leak government documents using tactics that involve deception by creating fake engagement, using search engine optimization, cloaked and imposter accounts, as well as cultural interventions through meme wars. Now more than ever we need politicians to verify what they are saying and arm themselves with facts. Even President Biden was fact-checked on his claim to have seen images of beheaded babies, when he had only read news reports.
Today, as we witness more and more attacks across Israel and Palestine, influential people—politicians, business people, athletes, celebrities, journalists, and folks just like me and you—are embroiled in fourth-generation warfare that uses networks of information as a weapon. The networks are key here, as engagement is what distributes bits of information—like viral videos, hashtags, or memes—across vast distances.
If we have all been drafted into this war, here are some questions that information scientist and professor Amelia Acker and I developed to gauge whether an online post might be disinformation. Ask yourself: Is it a promoted post or ad? Promotion is a shortcut to massive audiences and can be a very cheap way to go viral. Is there authentic engagement on the post, or do all of the replies seem strange or unrelated? If you suspect the account is an imposter, conduct a reverse image search of profile pictures and account banners, and check whether the Wayback Machine has snapshots of the account from prior months or years. Lastly, to spot spam, view attached media (pictures, videos, links), look for duplicates, and see if the account engages in spam posting, for example, replying to lots of posts with innocuous comments.
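For readers who want to operationalize that checklist, the questions above can be sketched as a simple red-flag counter. This is only an illustrative sketch: the field names, the equal weighting, and the scoring are assumptions for demonstration, not part of any published methodology.

```python
# Minimal sketch of the disinformation checklist as a red-flag counter.
# Field names and equal weighting are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    is_promoted: bool           # promoted post or ad?
    replies_on_topic: bool      # does engagement look authentic?
    profile_image_reused: bool  # did a reverse image search find duplicates?
    spam_replies: bool          # mass replies with innocuous comments?

def risk_score(post: Post) -> int:
    """Count how many red flags a post raises (0-4)."""
    flags = [
        post.is_promoted,
        not post.replies_on_topic,
        post.profile_image_reused,
        post.spam_replies,
    ]
    return sum(flags)

suspect = Post(is_promoted=True, replies_on_topic=False,
               profile_image_reused=True, spam_replies=False)
print(risk_score(suspect))  # prints 3
```

A higher score does not prove a post is disinformation; it only signals that more skepticism and manual verification are warranted.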
While my hope is for peace, we all must bear witness to these atrocities. In times of war, truth needs an advocate.
Formulas may be road-tested approaches to business challenges, but formulas have flaws. What worked yesterday might not be applicable or even plausible today. There are three primary weaknesses to relying on formulas to address business issues in a constantly changing environment: 1) they don’t work the same in all contexts; 2) they can be replicated by the competition; and 3) they can have hidden risks. To manage a fluctuating business climate, companies need a different toolkit. Instead of relying on static formulas that worked in the past, organizations need to focus on changing the way people think. This requires focusing on refining people’s cognitive skills, so they can better identify, assess, and solve unique problems in unique ways. This article covers three ways that companies can sharpen cognitive skills in their own organizations. Because when we know how to adapt, we can position ourselves for future success in an unknown environment.
The only constant in business is change, and it’s recently accelerated to light-speed. If the rising rates of employee burnout are any indicator, there are no signs of all this turbulence slowing down. People will continue to face increasing stress and anxiety from persistent uncertainty and ambiguity. To mitigate this barrage, organizations often turn to the familiar — those formulas that have had a track record of success in the past. However, what worked in the past can’t fully address today’s challenges, as those approaches were spawned in a wholly different environment.
For example, one of our financial services clients was facing a chronic decline in membership. The attrition had reached a tipping point and was at risk of moving into a free fall. The sales team was tasked with the turnaround and moved to double the size of the sales force in the field. When membership had dropped before, an influx of new personnel had brought it back up, so it seemed like a reliable move. Yet this formula didn’t produce the anticipated results, as consumers’ engagement preferences had changed to conducting more business online.
Why You Can’t Rely on Former Formulas of Success
Formulas may be road-tested approaches to business challenges, but formulas have flaws. What worked yesterday might not be applicable or even plausible today. There are three primary reasons why you can’t rely on former formulas of success:
1. Formulas don’t work the same in all contexts.
McDonald’s is famous for using a formula of granting geographically nonexclusive licenses to franchisees. Because it is perceived as successful, many new franchisers adopted this same formula. However, research has shown that granting nonexclusive licenses increases the likelihood that a new franchiser will fail. Prospective franchisees fear their business will suffer if another unit of the same brand opens nearby. New franchisers need franchisees to grow, and the non-exclusive formula isn’t well suited to an up-and-coming brand.
Or consider JCPenney. In June 2011, Ron Johnson, the man in charge of Apple’s wildly profitable retail stores and former Target executive, took the helm of the flailing retailer. He focused on implementing the formula he found successful with his previous employers — using constant markdowns, turning stores into destinations filled with branded merchandise, and reducing the number of private-label brands. Sixteen months later, Johnson was fired. Same-store sales fell by 25%, the company recorded a $1 billion loss, and its stock fell 19.72%. What worked at Target and Apple didn’t transpose onto JCPenney because their customers weren’t looking for experiences — they were looking for consistent deals, hard-to-find specialty sizes, and an unpretentious environment with high-quality house brands. Ron Johnson’s formula ended up being the exact opposite of what the JCPenney consumer was seeking.
2. Formulas also have a limited shelf life because they can be replicated by the competition.
When Bill Walsh became head coach of the National Football League’s San Francisco 49ers in 1979, he implemented a formula known as “The West Coast Offense.” With the West Coast offense, Walsh led the 49ers to Super Bowl championships during the 1981, 1984, and 1988 seasons. While the team went on to win two more Super Bowls, the benefits of the West Coast offense declined after other coaches began implementing similar formulas with their teams.
Another example is the famed “Toyota Way,” the legendary management system popularized by Jeffrey K. Liker in his 2004 book. The framework centered on a set of principles around organizational culture and continuous improvement (kaizen), focusing on the root cause of problems, and engaging in ongoing innovation. This formula enabled Toyota to gain significant market share in the American market between 1986 and 1997. However, since then, competitors including Ford, Honda, General Motors, and Stellantis have all adopted the system, significantly diminishing the competitive advantage the formula afforded.
3. Formulas can have hidden risks.
Research on technology startups in Silicon Valley found that companies using the “High-Commitment Management Model,” which focuses on hiring employees based on cultural fit and developing strong emotional bonds with them, are less likely to fail and more likely to go public than startups that used other hiring approaches. However, the same study found that changing the hiring structure after a startup launches triples the likelihood of failure. While this may be an effective model for a small, flat organization, once the company begins to scale, the formula isn’t sustainable. This forces a major change in hiring practices, which, in turn, puts the organization at risk.
The Just-In-Time production system was another unique approach formulated by Toyota. Focused on making manufacturing as efficient as possible, Just-In-Time reduces waiting time between work-in-progress procedures and lowers supply chain costs by delivering raw materials only when needed, rather than holding excess inventory. Yet when the 2020 pandemic hit, manufacturers had little inventory to meet production demand and no capability to resupply. As a result, global shortages are still reverberating across supply chains and manufacturers today.
To manage a fluctuating business climate, companies need a different toolkit. Instead of relying on static formulas that worked in the past, organizations need to focus on changing the way people think. This requires focusing on refining people’s cognitive skills, so they can better identify, assess, and solve unique problems in unique ways.
How to Sharpen Cognitive Skills in Your Organization
Cognitive skills are the mental processes that allow us to perceive, understand, and analyze information, and are essential for problem-solving, decision-making, and critical thinking. Psychologists Daniel Kahneman and Amos Tversky pioneered research into these higher-order processes, which Kahneman later popularized in his best-selling book, Thinking, Fast and Slow. He found that cognitive skills — which he deemed “slow” thinking — require more time and energy to effectively evaluate and apply reasoning to a problem. “Fast” thinking, on the other hand, is a more automatic and reactive response. Essentially, we need to apply our “slow” thinking skills to make better decisions and more effectively solve complex challenges. Fortunately, cognitive thinking skills can be expanded and improved with practice and training. Here are three basic ways to sharpen your organization’s cognitive skills:
1. Analyze known unknowns
Donald Rumsfeld, George W. Bush’s secretary of defense, became well known for his famous statement, “As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”
While Rumsfeld didn’t invent the concept, what’s become known as the Rumsfeld Matrix is a cognitive method for surfacing the things you think you know that you actually don’t. There are four categories of thinking in this matrix: 1) known knowns: things we are aware of and understand; 2) known unknowns: things we are aware of but don’t understand; 3) unknown knowns: things we understand but are not aware of; and 4) unknown unknowns: things we are neither aware of nor understand. Using this approach can help identify blind spots, false assumptions, and information gaps.
For instance, an organization may know there’s a risk of losing 10% of their customers to a new competitor (known knowns) and can easily manage and quantify the impact. However, they may also know there is a risk that rain may affect business operations, but a lack of knowledge about how much rain will fall (known unknowns). This scenario requires multiple action plans for the most probable outcomes and to be ready to switch to the right plan of action once more information is available.
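For readers who think in code, the four quadrants reduce to a tiny lookup table keyed on awareness and understanding. This sketch is purely illustrative and not part of the framework itself:

```python
# Hypothetical sketch of the Rumsfeld Matrix as a lookup table,
# keyed on (aware, understood). Labels match the four categories above.

QUADRANTS = {
    (True, True):   "known knowns",      # aware of and understand
    (True, False):  "known unknowns",    # aware of but don't understand
    (False, True):  "unknown knowns",    # understand but not aware of
    (False, False): "unknown unknowns",  # neither aware of nor understand
}

def classify(aware: bool, understood: bool) -> str:
    """Map a risk's awareness/understanding status to its quadrant."""
    return QUADRANTS[(aware, understood)]

# The rain scenario: we are aware of the risk but can't quantify it.
print(classify(aware=True, understood=False))  # prints known unknowns
```

The point of the exercise is not the code but the habit: explicitly asking, for each risk, whether you are aware of it and whether you understand it.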
2. Encourage divergent thinking
Divergent thinking is a thought process used to generate creative ideas by exploring many possible solutions. It involves breaking a problem down into its components to gain insight into each one. The process is spontaneous and free-flowing, with ideas generated in a random, unorganized fashion.
An example of divergent thinking would be generating as many uses as possible for a normal, everyday object: for instance, using a coin as a flathead screwdriver or a fork to dig a hole. Looking at a situation from a unique perspective can give rise to a unique solution. Organizations can apply divergent thinking in a variety of ways, from something as simple as bringing together groups of employees who normally don’t engage with one another to practicing synectics — the act of stimulating thought processes to uncover alternative ways to overcome obstacles. For instance, instead of tasking employees with finding ways to retain customers, they could be challenged to develop a list of ways to lose them, uncovering new ideas that might never have been considered with the typical approach.
3. Apply first-principles thinking
First-principles thinking is the idea of breaking down complicated problems into basic elements and then reassembling them from the ground up. Every play we see in the NBA was at some point created by someone who thought, “What would happen if the players did this?” and went out and tested the idea. Since then, thousands, if not millions, of plays have been created.
Coaches reason from first principles. The rules of basketball are the first principles: they govern what you can and can’t do. Everything is possible as long as it’s not against the rules. First-principles thinking allows for keeping better focus on the root components of problems rather than simply reacting to the symptoms.
For instance, Elon Musk, in his mission to transform space travel with his company SpaceX, first tried to buy rockets so he could launch them into orbit. However, the costs of buying a rocket outright were too high to make SpaceX a successful company. Instead, he applied first-principles thinking to boil a rocket down to its most fundamental components and materials. He realized the price of the materials to build a rocket was much lower than buying one outright. In other words, building rockets made more sense for the business model he was creating.
As is the case with so many things, success is found in moderation. Formulas can be a helpful guide, but complex challenges typically require unique insights and perspectives which only come from applying cognitive thinking skills. Focusing on these skills also helps employees to be more independent learners. If an individual knows how to learn, they will grow abilities and behaviors that are transferable to all kinds of contexts and problems thrown their way, which is inherent to the art of effectively navigating change. When we know how to adapt, we can position ourselves for future success in an unknown environment.
Andrea Belk Olson is a differentiation strategist, speaker, author, and customer-centricity expert. She is the CEO of Pragmadik, a behavioral science driven change agency, and has served as an outside consultant for EY and McKinsey. She is the author of 3 books, a 4-time ADDY® award winner, and contributing author for Entrepreneur Magazine, Rotman Management Magazine, Chief Executive Magazine, and Customer Experience Magazine.
In recent years, much attention has been drawn to the potential for social media manipulation to disrupt democratic societies. The U.S. Intelligence Community’s 2023 Annual Threat Assessment predicts that “foreign states’ malign use of digital information … will become more pervasive, automated, targeted … and probably will outpace efforts to protect digital freedoms.” Chinese Communist Party (CCP) disinformation networks are known to have been active since 2019—exploiting political polarization, the COVID-19 pandemic, and other issues and events to support its soft power agenda.
Despite the growing body of publicly available technical evidence demonstrating the threat posed by the CCP’s social media manipulation efforts, there is currently a lack of policy enforcement to target commercial actors that benefit from their involvement in Chinese influence operations (IO). However, there are existing policy options that could address this issue.
To better address the business of Chinese IO, the U.S. government can topple the disinformation-for-hire industry through sanctions, enact platform transparency legislation to better document influence operations across social media platforms, and push for action by the Federal Trade Commission (FTC) to counter deceptive business practices.
Commercial entities, from Chinese state-owned enterprises to Western AI companies, have had varying degrees of involvement in the business of Chinese influence campaigns. Chinese IO does not occur in a vacuum; it employs various tools and tactics to spread strategically favorable CCP content. For example, as reported in Meta’s Q1 2023 Adversarial Threat Report, Xi’an Tianwendian Network Technology built its own infrastructure for content dissemination by establishing a shell company, running a blog and website populated with plagiarized news articles, and operating fake pages and accounts.
Chinese IO efforts have also utilized Western companies. Synthesia, a UK-based technology company, was used to create AI avatars and spread pro-CCP content via a fake news outlet called “Wolf News.” Another example is Shanghai Haixun, a Chinese public relations firm that pushed IO both online and offline when it financed two protests in Washington, DC, in 2022 and then amplified content about those protests on Haixun-controlled social media accounts and fake-media websites.
The role of private companies in Chinese IO can be expected to expand, as they provide sophisticated and tailor-made generative AI services to amplify reach and increase tradecraft. Though the Chinese IO machine is widely known to lack sophistication, it has continued to mature and adapt to technological developments, evidenced by its use of deepfakes and AI-generated content. Most recently, Microsoft’s Threat Analysis Center discovered that a recent Chinese IO campaign was using AI-generated images of popular U.S. symbols (such as the Statue of Liberty) to besmirch American democratic ideals. The use of generative AI will introduce new challenges to counter the business of Chinese IO and the U.S. government needs to act fast to curtail it.
Our first recommendation is for the U.S. government to slowly dismantle the disinformation-for-hire industry by calling out the Chinese companies involved and imposing sanctions or financial costs on them. The Chinese government utilizes its gray propaganda machine to conduct overt influence operations through real media channels such as CGTN, Xinhua News, The Global Times, and others, and uses fake accounts to spread content from these media channels in covert influence operations.
With the attribution of IO to specific private entities such as Shanghai Haixun and others, the U.S. government could build a public case against covert Chinese IO and impose financial costs on Chinese companies, especially if they also provide legitimate products and/or services. The U.S. government has the jurisdiction to sanction private entities that directly pose a threat to U.S. national security through the Treasury Department’s Office of Foreign Assets Control (OFAC). There are currently OFAC sanctions in place for Chinese military companies, but not Chinese companies involved in influence operations targeting individuals in the United States.
There is also some potential historical precedent for sanctioning Chinese IO given that it is a type of malicious cyber activity; in 2021 the Biden Administration sanctioned Russian and Russian-affiliated entities involved in “malicious cyber-enabled activities” through an executive order. If the executive branch were to direct a policy focus towards known Chinese entities involved in malign covert influence operations, it could signal a first step toward naming and sanctioning Chinese companies.
Furthermore, if these entities were sanctioned, social media companies would be more inclined to remove sanctioned companies’ content from their platforms to avoid liability risks. When the European Union imposed sanctions on the media outlets Russia Today and Sputnik after the recent Russian invasion of Ukraine, Facebook and TikTok complied and removed content from these outlets to avoid liability issues, though they had not taken sweeping action on overt state media before. The U.S. government could use this approach to identify Chinese private companies bolstering IO directed at the American public, name them, and impose transaction costs on them through sanctions.
Our second recommendation is to mandate that large social media companies or Very Large Online Platforms (VLOPs) adhere to universal transparency reporting on influence operations and external independent research requirements. Large social media platforms currently face the challenge of deplatforming influence operations at scale, which grants them the ability to choose what to report in the absence of government regulations. Regulation that mandates universal transparency reporting on IO would be a meaningful first step toward prodding platforms to devote greater attention to that challenge.
The implementation of this recommendation could prove to be more challenging given that transparency reporting currently operates on a voluntary basis, and the efforts of policymakers could be stymied by First Amendment and Section 230 protections. Recently, a bipartisan group of U.S. Senators proposed the Platform Accountability and Transparency Act in which social media platforms would have to comply with data access requests from external researchers. Any failure in compliance would result in the removal of Section 230 immunity.
Initiatives such as these are essential to promoting platform transparency. If policymakers can mandate transparency reporting on influence operations for VLOPs, including specific parameters of interest (companies involved, number of inauthentic and authentic accounts in the network, generative AI content identified, malicious domains used, political content/narratives, etc.), the U.S. government could acquire further insight about the nature of IO at scale. A universal transparency effort could also empower the open source intelligence capabilities of intelligence agencies, result in principled moderation decisions, increase knowledge about the use of generative AI by malign actors, and empower external researchers to investigate all forms of IO.
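To make the proposal concrete, the reporting parameters listed above could be captured in a structured record along the following lines. The schema is an illustrative assumption for discussion, not a proposed or existing standard, and the example values are fabricated placeholders.

```python
# Illustrative sketch of a per-operation transparency report, based on
# the parameters discussed above. Field names and example values are
# assumptions, not a proposed or existing reporting standard.

from dataclasses import dataclass, field

@dataclass
class IOTransparencyReport:
    companies_involved: list[str]               # attributed commercial actors
    inauthentic_accounts: int                   # fake accounts in the network
    authentic_accounts: int                     # real accounts amplifying it
    generative_ai_content: bool                 # AI-generated media detected?
    malicious_domains: list[str] = field(default_factory=list)
    narratives: list[str] = field(default_factory=list)

# Hypothetical example record for a single takedown.
report = IOTransparencyReport(
    companies_involved=["ExamplePR Ltd."],      # placeholder entity
    inauthentic_accounts=1200,
    authentic_accounts=45,
    generative_ai_content=True,
    malicious_domains=["fake-news.example"],
    narratives=["pro-state messaging"],
)
print(report.inauthentic_accounts + report.authentic_accounts)  # prints 1245
```

A common record shape like this is what would let researchers and agencies compare operations across platforms rather than parsing each company's ad hoc reports.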
Our third and last recommendation is for the FTC to continue to pursue and expand its focus on both domestic and foreign companies engaging in deceptive business practices to bolster Chinese influence operations. In 2019, the FTC imposed a fine of $2.5 million on Devumi, a company that engaged in social media fraud by selling fake indicators of influence (retweets, Twitter followers, etc.). Though this action was a helpful first step, it is not likely to be a long-term deterrent for all companies engaged in these harmful practices. The FTC should continue to pursue such cases and work with its international partners via its Office of International Affairs. The challenges of increased FTC involvement are vast; the agency has been under-resourced and must choose its cases carefully to achieve maximum impact. However, a sharper FTC focus on the business of Chinese IO could reduce deceptive practices online, protect consumers against the harmful use of generative AI and other technologies, and increase visibility for this issue for social media companies.
Holding private sector actors accountable for Chinese influence operations will not be a straightforward process for the U.S. government, given the need for transparency regulation for social media platforms, the political capital needed for the executive branch to sanction Chinese private entities involved in IO, and FTC’s resource constraints. However, these policy options are necessary to impose costs and help dismantle the disinformation business behind Chinese influence operations.
Bilva Chandra is an adjunct technology and security policy fellow and Lev Navarre Chao was previously a policy analyst at the nonprofit, nonpartisan RAND Corporation.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
Thousands of users across the Defense Department’s “fourth estate” will get their first chance to use modern collaboration tools on classified IT networks over the next several weeks as DoD continues its push to deploy Office 365 across the military departments, Defense agencies and field activities.
The Defense Information Systems Agency has been piloting the new service — called DoD365-Secret — since January. But officials are now fully deploying it for users across the 17 components of the Office of the Secretary of Defense (OSD), mainly in the Pentagon itself and in the nearby Mark Center in Alexandria, Virginia.
It’s a major shift, not only in that it’s one of DoD’s first large-scale forays into cloud computing at the secret level, but also because it will have the effect of consolidating an aging patchwork of tools senior leaders and their support staff have been using to discuss classified information for years, said Danielle Metz, OSD’s chief information officer.
“Over the past 10 to 15 years, those who live on our classified environment to do their mission have had to really figure out how to stitch together some collaboration capabilities using really old-school chat services that aren’t very effective and aren’t well used across the board,” she said during an interview for Federal News Network’s On DoD. “Effectively what this does is it brings everybody together — we’re all on Teams and getting the same collaborative experience where we’re able to do chat, we’re able to do video, we’re able to collaborate on documents all at the same time, we’re able to store it in a cloud-based environment. None of that exists right now on the classified side, but we are at the precipice of having all of this at our fingertips.”
The implementation of DoD365-Secret across those 17 components will be one of the first major accomplishments for Metz’s new office, which marks its first anniversary this month. Prior to that, each of the OSD sub-offices — known as “principal staff assistants” — operated somewhat independently when it came to IT governance and planning. Because of that, the networks they use are still fragmented and complex. Cloud helps solve part of that problem.
“A cloud-based approach allows us to look and feel and act as if we’re on the same environment, because we are — we’re in the cloud,” Metz said. “The networks are still going to be what the networks are, and there are some modernization activities associated with bringing those up to a better standardized and consistent digital experience. But I think we’re showcasing the importance of being able to all be on in the same environment to be able to work more jointly together to collaborate. It reduces the need for the workforce to figure out how to do it themselves — that’s what I don’t want them to do. I want them to use their creativity to actually do their job. Our job is to ensure that they have the right capabilities and tools to do their job better.”
Aside from modernizing and simplifying those networks, other near-term goals for Metz’s new office include updating end-user devices and laying the groundwork for other significant moves to the cloud. In the early days, the focus is on treating the collection of OSD offices as a single IT enterprise and building out common IT services.
“One of the things that we were able to do is to build that governance structure, create an identity, so that we can have a community of practice,” she said. “We’ve also identified a number of PSAs that are the pockets of excellence: forging ahead, failing fast, and pushing the envelope. They’ve been able to figure out what their business processes are to get to the technical makeup of moving to cloud adoption.”
Over the long-term, Metz said OSD will rely heavily on DoD’s new Joint Warfighting Cloud Computing (JWCC) contract — but those task orders will likely be organized along functional lines, once the office is ready to lean in to supporting mission-specific IT needs. For now, the objective is to map out OSD’s cloud requirements and build the support services to help them migrate.
“We want to be able to do something similar to what the Army did with their Enterprise Cloud Management Agency: create a corporate playbook for OSD,” she said. “What I don’t want is for each individual PSA to fail on their own and do it in a vacuum. We want to be able to at least standardize what we think the business processes are, to help inform the technical processes to determine which systems and workloads need to be moved to a targeted cloud environment … The other side of the coin that we’ve struggled with for OSD is that we don’t have an authorizing official (AO) for cloud, which makes it extremely difficult to do anything. And so we’re working on testing out and piloting AO as a service. That and some other basic elements need to be in place and available to OSD in order for us to even start moving the needle for cloud adoption.”
Jared Serbu is deputy editor of Federal News Network and reports on the Defense Department’s contracting, legislative, workforce and IT issues.
This CISA-NSA guidance reveals concerning gaps and deficits in the multifactor authentication and Single Sign-On industry and calls for vendors to make investments and take additional steps.
The National Security Agency and the Cybersecurity and Infrastructure Security Agency published on October 4, 2023, a document titled Identity and Access Management: Developer and Vendor Challenges. This new IAM CISA-NSA guidance focuses on the challenges and technology gaps that are limiting the adoption and secure deployment of multifactor authentication and Single Sign-On technologies within organizations.
The document was authored by a cross-sector panel of public- and private-sector partners working under the CISA- and NSA-led Enduring Security Framework. The ESF is tasked with investigating risks to critical infrastructure and national security systems. The guidance builds on the agencies’ previous report, Identity and Access Management Recommended Best Practices Guide for Administrators.
In an email interview with TechRepublic, Jake Williams, faculty member at IANS Research and former NSA offensive hacker, said, “The publication (it’s hard to call it guidance) highlights the challenges with comparing the features provided by vendors. CISA seems to be putting vendors on notice that they want vendors to be clear about what standards they do and don’t support in their products, especially when a vendor only supports portions of a given standard.”
IAM-related challenges and gaps affecting vendors and developers
The CISA-NSA document details the technical IAM challenges affecting developers and vendors. Looking specifically at the deployment of multifactor authentication and Single Sign-On, the report highlights several gaps.
Definitions and policy
According to CISA and the NSA, the definitions and policies for the different variations of MFA are unclear and confusing. The report notes a need for clarity to drive interoperability and standardization across MFA systems. The confusion hampers companies’ and developers’ ability to make informed decisions about which IAM solutions to integrate into their environments.
Lack of clarity regarding MFA security properties
The CISA-NSA report notes that vendors are not offering clear definitions when it comes to the level of security that different types of MFAs provide, as not all MFAs offer the same security.
For example, SMS-based MFA is more vulnerable than MFA backed by dedicated hardware, and only some MFA types, such as those based on public key infrastructure or FIDO, are resistant to phishing.
Lack of understanding leading to integration deficits
CISA and the NSA say the architectures for using open standards-based SSO alongside legacy applications are not widely understood. The report calls for a shared, open-source repository of standards-based modules and integration patterns to solve these challenges and aid adoption.
SSO features and pricing plans
SSO capabilities are often bundled with other high-end enterprise features, putting them out of reach for small and midsize organizations. Addressing this would require vendors to offer organizational SSO in pricing plans that suit businesses of every size.
MFA governance and workers
Another main gap identified is maintaining the integrity of MFA governance over time as workers join or leave organizations; available MFA solutions often lack support for this process, known as credential lifecycle management, the CISA-NSA report stated.
The overall confusion around MFA and SSO, the lack of specifics and standards, and the gaps in support and available technologies all affect the security of the companies that must deploy IAM systems with the information and services available to them.
“An often-bewildering list of options is available to be combined in complicated ways to support diverse requirements,” the report noted. “Vendors could offer a set of predefined default configurations that are pre-validated end to end for defined use cases.”
Key takeaways from the CISA-NSA’s IAM report
Williams told TechRepublic that the biggest takeaway from this new publication is that IAM is extremely complex.
“There’s little for most organizations to do themselves,” Williams said, referring to the new CISA-NSA guidance. “This (document) is targeted at vendors and will certainly be a welcome change for CISOs trying to perform apples-to-apples comparisons of products.”
Deploying hardware security modules
Williams said another key takeaway is the acknowledgment that some applications will require users to implement hardware security modules to achieve acceptable security. HSMs are usually plug-in cards or external devices that connect to computers or other devices. These security devices protect cryptographic keys, perform encryption and decryption and create and verify digital signatures. HSMs are considered a robust authentication technology, typically used by banks, financial institutions, healthcare providers, government agencies and online retailers.
“In many deployment contexts, HSMs can protect the keys from disclosure in a system memory dump,” Williams said. “This is what led to highly sensitive keys being stolen from Microsoft by Chinese threat actors, ultimately leading to the compromise of State Department email.”
“CISA raises this in the context of usability vs. security, but it’s worth noting that nothing short of an HSM will adequately meet many high-security requirements for key management,” Williams warns.
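The key-custody property Williams describes can be illustrated with a toy class that mimics an HSM’s contract: callers receive an opaque handle and signatures, never the key bytes, so a memory dump of the application process reveals no key material. This is a conceptual sketch only (all names are hypothetical); real HSMs expose this contract through standardized APIs such as PKCS#11, and typically use asymmetric keys rather than the HMAC used here for brevity.

```python
import hashlib
import hmac
import os

class SoftHsm:
    """Toy stand-in for an HSM: key material stays inside; callers get only handles."""

    def __init__(self):
        self._keys = {}  # handle -> key bytes; never returned to callers

    def generate_key(self):
        handle = len(self._keys) + 1
        self._keys[handle] = os.urandom(32)
        return handle  # opaque handle, not the key itself

    def sign(self, handle, message):
        # Signing happens "inside" the module; only the signature leaves.
        return hmac.new(self._keys[handle], message, hashlib.sha256).digest()

    def verify(self, handle, message, signature):
        return hmac.compare_digest(self.sign(handle, message), signature)
```

The application code holds only the integer handle; compromising the application yields signatures it already made, not the ability to mint new keys elsewhere.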
Conclusions and key recommendations for vendors
The CISA-NSA document ends with a detailed section of key recommendations for vendors, which, as Williams says, “puts them on notice” about the issues they need to address. Williams highlighted the need to standardize terminology so it’s clear what a vendor supports.
Chad McDonald, chief information security officer of Radiant Logic, also talked to TechRepublic via email and agreed with Williams. Radiant Logic is a U.S.-based company that focuses on solutions for identity data unification and integration, helping organizations manage, use and govern identity data.
“Modern-day workforce authentication can no longer fit one certain mold,” McDonald said. “Enterprises, especially those with employees coming from various networks and locations, require tools that allow for complex provisioning and do not limit users in their access to needed resources.”
For this to happen, a collaborative approach amongst all solutions is essential, added McDonald. “Several of CISA’s recommendations for vendors and developers not only push for a collaborative approach but are incredibly feasible and actionable.”
McDonald said the industry would welcome standard MFA terminology to allow equitable comparison of products, the prioritization of user-friendly MFA solutions for both mobile and desktop platforms to drive wider adoption and the implementation of broader support for and development of identity standards in the enterprise ecosystem.
Recommendations for vendors
Create standard MFA terminology
To address ambiguous MFA terminology, the report recommended creating standard MFA terminology: clear, interoperable definitions and policies that allow organizations to make value comparisons and integrate these solutions into their environments.
Create phishing-resistant authenticators, then standardize their adoption
In response to the lack of clarity about the security properties of different MFA implementations, CISA and the NSA recommended additional investment by the vendor community in phishing-resistant authenticators, which provide greater defense against sophisticated attacks.
The report also concludes that simplifying and standardizing the security properties of MFA and phishing-resistant authenticators, including form factors embedded into operating systems, “would greatly enhance the market.” CISA and the NSA called for more investment in high-assurance MFA implementations for enterprise use, designed with user-friendly flows on both mobile and desktop platforms to promote higher MFA adoption.
Develop more secure enrollment tooling
On governance and self-enrollment, the report said vendors need to develop more secure enrollment tooling that supports the complex provisioning needs of large organizations. These tools should also automatically discover and purge enrolled MFA authenticators that have gone unused for a set period or whose usage patterns are abnormal.
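The purge requirement above amounts to a simple lifecycle policy over enrolled authenticators. The sketch below is illustrative only, not anything specified in the report; the `Authenticator` record and the 90-day idle window are hypothetical choices standing in for whatever an organization’s governance policy dictates.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Authenticator:
    user: str
    kind: str            # e.g. "fido2", "totp", "sms"
    last_used: datetime

def purge_stale(authenticators, now=None, max_idle_days=90):
    """Split enrolled authenticators into (kept, purged) by an idle-time policy."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_idle_days)
    kept = [a for a in authenticators if a.last_used >= cutoff]
    purged = [a for a in authenticators if a.last_used < cutoff]
    return kept, purged
```

A production tool would also flag anomalous usage (new geography, unusual frequency) rather than relying on idle time alone, which is the “usage is not normal” case the report raises.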
“Vendors have a real opportunity to lead the industry and build trust with product consumers with additional investments to bring such phishing-resistant authenticators to more use cases. Simplifying and further standardizing their adoption, including in form factors embedded into operating systems, would greatly enhance the market,” CISA and the NSA stated.
We’re in a very strange moment for the internet. We all know it’s broken. That’s not news. But there’s something in the air—a vibe shift, a sense that things are about to change. For the first time in years, it feels as though something truly new and different might be happening with the way we communicate online. The stranglehold that the big social platforms have had on us for the last decade is weakening. The question is: What do we want to come next?
There’s a sort of common wisdom that the internet is irredeemably bad, toxic, a rash of “hellsites” to be avoided. That social platforms, hungry to profit off your data, opened a Pandora’s box that cannot be closed. Indeed, there are truly awful things that happen on the internet, things that make it especially toxic for people from groups disproportionately targeted with online harassment and abuse. Profit motives led platforms to ignore abuse too often, and they also enabled the spread of misinformation, the decline of local news, the rise of hyperpartisanship, and entirely new forms of bullying and bad behavior. All of that is true, and it barely scratches the surface.
But the internet has also provided a haven for marginalized groups and a place for support, advocacy, and community. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh. It can send you a pizza. It’s duality, good and bad, and I refuse to toss out the dancing-baby GIF with the tubgirl-dot-png bathwater. The internet is worth fighting for because despite all the misery, there’s still so much good to be found there. And yet, fixing online discourse is the definition of a hard problem. But look. Don’t worry. I have an idea.
What is the internet and why is it following me around?
To cure the patient, first we must identify the disease.
When we talk about fixing the internet, we’re not referring to the physical and digital network infrastructure: the protocols, the exchanges, the cables, and even the satellites themselves are mostly okay. (There are problems with some of that stuff, to be sure. But that’s an entirely other issue—even if both do involve Elon Musk.) “The internet” we’re talking about refers to the popular kinds of communication platforms that host discussions and that you probably engage with in some form on your phone.
Some of these are massive: Facebook, Instagram, YouTube, TikTok, and X (formerly Twitter). You almost certainly have an account on at least one of these; maybe you’re an active poster, maybe you just flip through your friends’ vacation photos while on the john.
Although the exact nature of what we see on those platforms can vary widely from person to person, they mediate content delivery in universally similar ways that are aligned with their business objectives. A teenager in Indonesia may not see the same images on Instagram that I do, but the experience is roughly the same: we scroll through some photos from friends or family, maybe see some memes or celebrity posts; the feed turns into Reels; we watch a few videos, maybe reply to a friend’s Story or send some messages. Even though the actual content may be very different, we probably react to it in much the same way, and that’s by design.
The internet also exists outside these big platforms; it’s blogs, message boards, newsletters and other media sites. It’s podcasts and Discord chatrooms and iMessage groups. These will offer more individualized experiences that may be wildly different from person to person. They often exist in a sort of parasitic symbiosis with the big, dominant players, feeding off each other’s content, algorithms, and audience.
The internet is good things. For me, it’s things I love, like Keyboard Cat and Double Rainbow. It’s personal blogs and LiveJournals; it’s AIM away messages and MySpace top 8s. It’s the distracted-girlfriend meme and a subreddit for “What is this bug?” It is a famous thread on a bodybuilding forum where meatheads argue about how many days are in a week. For others, it’s Call of Duty memes and the mindless entertainment of YouTubers like Mr. Beast, or a place to find the highly specific kind of ASMR video they never knew they wanted. It’s an anonymous supportive community for abuse victims, or laughing at Black Twitter’s memes about the Montgomery boat brawl, or trying new makeup techniques you learned on TikTok.
It’s also very bad things: 4chan and the Daily Stormer, revenge porn, fake news sites, racism on Reddit, eating disorder inspiration on Instagram, bullying, adults messaging kids on Roblox, harassment, scams, spam, incels, and increasingly needing to figure out if something is real or AI.
The bad things transcend mere rudeness or trolling. There is an epidemic of sadness, of loneliness, of meanness, that seems to self-reinforce in many online spaces. In some cases, it is truly life and death. The internet is where the next mass shooter is currently getting his ideas from the last mass shooter, who got them from the one before that, who got them from some of the earliest websites online. It’s an exhortation to genocide in a country where Facebook employed too few moderators who spoke the local language because it had prioritized growth over safety.
The existential problem is that both the best and worst parts of the internet exist for the same set of reasons, were developed with many of the same resources, and often grew in conjunction with each other. So where did the sickness come from? How did the internet get so … nasty? To untangle this, we have to go back to the early days of online discourse.
The internet’s original sin was an insistence on freedom: it was made to be free, in many senses of the word. The internet wasn’t initially set up for profit; it grew out of a communications medium intended for the military and academics (some in the military wanted to limit Arpanet to defense use as late as the early 1980s). When it grew in popularity along with desktop computers, Usenet and other popular early internet applications were still largely used on university campuses with network access. Users would grumble that each September their message boards would be flooded with newbies, until eventually the “eternal September”—a constant flow of new users—arrived in the mid-’90s with the explosion of home internet access.
When the internet began to be built out commercially in the 1990s, its culture was, perversely, anticommercial. Many of the leading internet thinkers of the day belonged to a cohort of AdBusters-reading Gen Xers and antiestablishment Boomers. They were passionate about making software open source. Their very mantra was “Information wants to be free”—a phrase attributed to Stewart Brand, the founder of the Whole Earth Catalog and the pioneering internet community the WELL. This ethos also extended to a passion for freedom of speech, and a sense of responsibility to protect it.
It just so happened that those people were quite often affluent white men in California, whose perspective failed to predict the dark side of the free-speech, free-access havens they were creating. (In fairness, who would have imagined that the end result of those early discussions would be Russian disinformation campaigns targeting Black Lives Matter? But I digress.)
The culture of free demanded a business model that could support it. And that was advertising. Through the 1990s and even into the early ’00s, advertising on the internet was an uneasy but tolerable trade-off. Early advertising was often ugly and annoying: spam emails for penis enlargement pills, badly designed banners, and (shudder) pop-up ads. It was crass but allowed the nice parts of the internet—message boards, blogs, and news sites—to be accessible to anyone with a connection.
But advertising and the internet are like that small submersible sent to explore the Titanic: the carbon fiber works very efficiently, until you apply enough pressure. Then the whole thing implodes.
Targeted advertising and the commodification of attention
In 1999, the ad company DoubleClick was planning to combine personal data with tracking cookies to follow people around the web so it could target its ads more effectively. This changed what people thought was possible. It turned the cookie, originally a neutral technology for storing Web data locally on users’ computers, into something used for tracking individuals across the internet for the purpose of monetizing them.
To the netizens of the turn of the century, this was an abomination. And after a complaint was filed with the US Federal Trade Commission, DoubleClick dialed back the specifics of its plans. But the idea of advertising based on personal profiles took hold. It was the beginning of the era of targeted advertising, and with it, the modern internet. Google bought DoubleClick for $3.1 billion in 2008. That year, Google’s revenue from advertising was $21 billion. Last year, Google parent company Alphabet took in $224.4 billion in revenue from advertising.
Our modern internet is built on highly targeted advertising using our personal data. That is what makes it free. The social platforms, most digital publishers, Google—all run on ad revenue. For the social platforms and Google, their business model is to deliver highly sophisticated targeted ads. (And business is good: in addition to Google’s billions, Meta took in $116 billion in revenue for 2022. Nearly half the people living on planet Earth are monthly active users of a Meta-owned product.) Meanwhile, the sheer extent of the personal data we happily hand over to them in exchange for using their services for free would make people from the year 2000 drop their flip phones in shock.
And that targeting process is shockingly good at figuring out who you are and what you are interested in. It’s targeting that makes people think their phones are listening in on their conversations; in reality, it’s more that the data trails we leave behind become road maps to our brains.
When we think of what’s most obviously broken about the internet—harassment and abuse; its role in the rise of political extremism, polarization, and the spread of misinformation; the harmful effects of Instagram on the mental health of teenage girls—the connection to advertising may not seem immediate. And in fact, advertising can sometimes have a mitigating effect: Coca-Cola doesn’t want to run ads next to Nazis, so platforms develop mechanisms to keep them away.
But online advertising demands attention above all else, and it has ultimately enabled and nurtured all the worst of the worst kinds of stuff. Social platforms were incentivized to grow their user base and attract as many eyeballs as possible for as long as possible to serve ever more ads. Or, more accurately, to serve ever more you to advertisers. To accomplish this, the platforms have designed algorithms to keep us scrolling and clicking, the result of which has played into some of humanity’s worst inclinations.
In 2018, Facebook tweaked its algorithms to favor more “meaningful social interactions.” It was a move meant to encourage users to interact more with each other and ultimately keep their eyeballs glued to News Feed, but it resulted in people’s feeds being taken over by divisive content. Publishers began optimizing for outrage, because that was the type of content that generated lots of interactions.
On YouTube, where “watch time” was prioritized over view counts, algorithms recommended and ran videos in an endless stream. And in their quest to sate attention, these algorithms frequently led people down ever more labyrinthine corridors to the conspiratorial realms of flat-earth truthers, QAnon, and their ilk. Algorithms on Instagram’s Explore page are designed to keep us scrolling (and spending) even after we’ve exhausted our friends’ content, often by promoting popular aesthetics whether or not the user had previously been interested. The Wall Street Journal reported in 2021 that Instagram had long understood it was harming the mental health of teenage girls through content about body image and eating disorders, but ignored those reports. Keep ’em scrolling.
There is an argument that the big platforms are merely giving us what we wanted. Anil Dash, a tech entrepreneur and blogging pioneer who worked at SixApart, the company that developed the blog software Movable Type, remembers a backlash when his company started charging for its services in the mid-’00s. “People were like, ‘You’re charging money for something on the internet? That’s disgusting!’” he told MIT Technology Review. “The shift from that to, like, If you’re not paying for the product, you’re the product … I think if we had come up with that phrase sooner, then the whole thing would have been different. The whole social media era would have been different.”
The big platforms’ focus on engagement at all costs made them ripe for exploitation. Twitter became a “honeypot for a**holes” where trolls from places like 4chan found an effective forum for coordinated harassment. Gamergate started in swampier waters like Reddit and 4chan, but it played out on Twitter, where swarms of accounts would lash out at the chosen targets, generally female video-game critics. Trolls also discovered that Twitter could be gamed to get vile phrases to trend: in 2013, 4chan accomplished this with #cuttingforbieber, falsely claiming to represent teenagers engaging in self-harm for the pop singer. Platform dynamics created such a target-rich environment that intelligence services from Russia, China, and Iran—among others—use them to sow political division and disinformation to this day.
“Humans were never meant to exist in a society that contains 2 billion individuals,” says Yoel Roth, a technology policy fellow at UC Berkeley and former head of trust and safety for Twitter. “And if you consider that Instagram is a society in some twisted definition, we have tasked a company with governing a society bigger than any that has ever existed in the course of human history. Of course they’re going to fail.”
How to fix it
Here’s the good news. We’re in a rare moment when a shift just may be possible; the previously intractable and permanent-seeming systems and platforms are showing that they can be changed and moved, and something new could actually grow.
One positive sign is the growing understanding that sometimes … you have to pay for stuff. And indeed, people are paying individual creators and publishers on platforms such as Substack, Patreon, and Twitch. Meanwhile, the freemium model that YouTube Premium, Spotify, and Hulu explored proves (some) people are willing to shell out for ad-free experiences. A world where only those who can afford to pay $9.99 a month get to ransom back their time and attention from crappy ads isn’t ideal, but at least it demonstrates that a different model can work.
Another thing to be optimistic about (although time will tell if it actually catches on) is federation—a more decentralized version of social networking. Federated networks like Mastodon, Bluesky, and Meta’s Threads are all just Twitter clones on their surface—a feed of short text posts—but they’re also all designed to offer various forms of interoperability. Basically, where your current social media account and data exist in a walled garden controlled entirely by one company, you could be on Threads and follow posts from someone you like on Mastodon—or at least Meta says that’s coming. (Many—including internet pioneer Richard Stallman, who has a page on his personal website devoted to “Why you should not be used by Threads”—have expressed skepticism of Meta’s intentions and promises.) Even better, it enables more granular moderation. Again, X (the website formerly known as Twitter) provides a good example of what can go wrong when one person, in this case Elon Musk, has too much power in making moderation decisions—something federated networks and the so-called “fediverse” could solve.
The big idea is that in a future where social media is more decentralized, users will be able to easily switch networks without losing their content and followings. “As an individual, if you see [hate speech], you can just leave, and you’re not leaving your entire community—your entire online life—behind. You can just move to another server and migrate all your contacts, and it should be okay,” says Paige Collings, a senior speech and privacy advocate at the Electronic Frontier Foundation. “And I think that’s probably where we have a lot of opportunity to get it right.”
There’s a lot of upside to this, but Collings is still wary. “I fear that while we have an amazing opportunity,” she says, “unless there’s an intentional effort to make sure that what happened on Web2 does not happen on Web3, I don’t see how it will not just perpetuate the same things.”
Federation and more competition among new apps and platforms provide a chance for different communities to create the kinds of privacy and moderation they want, rather than following top-down content moderation policies created at headquarters in San Francisco that are often explicitly mandated not to mess with engagement. Yoel Roth’s dream scenario would be that in a world of smaller social networks, trust and safety could be handled by third-party companies that specialize in it, so social networks wouldn’t have to create their own policies and moderation tactics from scratch each time.
The tunnel-vision focus on growth created bad incentives in the social media age. It made people realize that if you wanted to make money, you needed a massive audience, and that the way to get a massive audience was often by behaving badly. The new form of the internet needs to find a way to make money without pandering for attention. There are some promising new gestures toward changing those incentives already. Threads doesn’t show the repost count on posts, for example—a simple tweak that makes a big difference because it doesn’t incentivize virality.
We, the internet users, also need to learn to recalibrate our expectations and our behavior online. We need to learn to appreciate the small corners of the internet, like a new Mastodon server, a Discord, or a blog. We need to trust in the power of “1,000 true fans” over cheaply amassed millions.
Anil Dash has been repeating the same thing over and over for years now: that people should buy their own domains, start their own blogs, own their own stuff. And sure, these fixes require a technical and financial ability that many people do not possess. But with the move to federation (which at least provides control, if not ownership) and smaller spaces, it seems possible that we’re actually going to see some of those shifts away from big-platform-mediated communication start to happen.
“There’s a systemic change that is happening right now that’s bigger,” he says. “You have to have a little bit of perspective of life pre-Facebook to sort of say, Oh, actually, some of these things are just arbitrary. They’re not intrinsic to the internet.”
The fix for the internet isn’t to shut down Facebook or log off or go outside and touch grass. The solution to the internet is more internet: more apps, more spaces to go, more money sloshing around to fund more good things in more variety, more people engaging thoughtfully in places they like. More utility, more voices, more joy.
My toxic trait is I can’t shake that naïve optimism of the early internet. Mistakes were made, a lot of things went sideways, and there has undeniably been a lot of pain and misery that came from the social era. The mistake now would be not to learn from it.
Katie Notopoulos is a writer who lives in Connecticut. She’s written for BuzzFeed News, Fast Company, GQ, and Columbia Journalism Review.
Applications are now open for the IBM Research Global Internship Program, offering the chance to intern with IBM Quantum in the summer of 2024.
At IBM Quantum, we’re bringing useful quantum computing to the world. This technology is widely expected to solve valuable problems that are unsolvable using any known methods on classical supercomputers. And quantum summer internships, as part of the IBM Research Global Internship Program, are perhaps the most valuable in the field. Every intern working in quantum makes meaningful contributions to the IBM Quantum Development Roadmap — pushing the field of quantum computing forward in the process.
We have directly trained more than 400 interns at all levels of higher education since 2020, many of whom have gone on to work at IBM Quantum or elsewhere in the field of quantum after graduation. Interns have the opportunity to work directly with researchers, developers, and business experts working to advance the field of quantum computing. Our interns have researched quantum applications, designed hardware, developed open-source projects with Qiskit, carried out market research, and more.
We are hiring software developer, hardware engineer, and research scientist interns for the summer of 2024. Interns in the US will work at either the Thomas J. Watson Research Center in Yorktown Heights, New York, or at IBM Research — Almaden in San Jose, California, from either May 20, 2024 to August 9, 2024, or from June 17, 2024 to September 6, 2024. International internship opportunities will be added to this article soon. See the full list of roles and links below to apply.
The internship experience
Internships with IBM Quantum prepare students with the skills, networks, and career paths needed to launch their careers in the field of quantum. In previous years, the IBM Quantum internship program has included the Qiskit Global Summer School, poster sessions, and a fireside chat with IBM Fellow and Vice President of IBM Quantum, Jay Gambetta, hosted and organized by IBM Quantum interns.
Arian Noori, a University of Wisconsin graduate student and quantum hardware engineering intern who worked on optimizing cryogenic qubit control transmission lines for improved signal delivery to a quantum chip, said about his experience interning at IBM Research:
“Not only did I acquire invaluable practical and technical skills, but my mentors also instilled in me new and intuitive ways of approaching engineering and physics problems. The IBM community is remarkably friendly and open. I was surrounded by some of the most intellectual individuals in the world, and everyone was delighted to share insights into their projects.
“This exposure allowed me to better conceptualize the entire quantum computing ecosystem, enabling a deeper understanding of the most pressing challenges in the field.”
Columbia University undergraduate and quantum software intern Danielle Odigie said about her IBM Research internship experience: “I feel like this summer was the summer where I started feeling like an actual software engineer! It was so fulfilling and so cool to be able to aid in the efforts to create software that connects programmers to such powerful technology.”
And Dhruv Srinivasan, University of Maryland undergraduate and quantum hardware intern learned about the bring-up of quantum computers, and the “many facets of the quantum stack, where I worked on both the room-temperature electronics cooling, as well as the calibration of the amplification of a chain of qubits.”
For more advice from previous interns, take a look at last year’s blog. Though familiarity with quantum computing is not required, we suggest candidates consider getting acquainted with Qiskit. We have also revamped the IBM Quantum Learning platform, making it easier than ever to hone your quantum computing skills and make yourself a more competitive candidate. Check out the IBM Quantum Learning platform.
2024 internship openings
We look forward to hearing from you. And for those outside the United States, we will share updates on internship opportunities outside of the US in the near future. Follow IBM Quantum on LinkedIn for updates.
Today, CISA, the Federal Bureau of Investigation, the National Security Agency, and the U.S. Department of the Treasury released guidance on improving the security of open source software (OSS) in operational technology (OT) and industrial control systems (ICS). In alignment with CISA’s recently released Open Source Security Roadmap, the guidance provides recommendations to OT/ICS organizations on:
Supporting OSS development and maintenance,
Managing and patching vulnerabilities in OT/ICS environments, and
Using the Cross-Sector Cybersecurity Performance Goals (CPGs) as a common framework for adopting key cybersecurity best practices in relation to OSS.
Alongside the guidance, CISA published the Securing OSS in OT web page, which details the Joint Cyber Defense Collaborative (JCDC) OSS planning initiative, a priority within the JCDC 2023 Planning Agenda. The initiative will support collaboration between the public and private sectors—including the OSS community—to better understand and secure OSS use in OT/ICS, which will strengthen defense against OT/ICS cyber threats.
CISA encourages OT/ICS organizations to review this guidance and implement its recommendations.
Google on Tuesday announced the ability for all users to set up passkeys by default, five months after it rolled out support for the FIDO Alliance-backed passwordless standard for Google Accounts on all platforms.
“This means the next time you sign in to your account, you’ll start seeing prompts to create and use passkeys, simplifying your future sign-ins,” Google’s Sriram Karra and Christiaan Brand said.
“It also means you’ll see the ‘skip password when possible‘ option toggled on in your Google Account settings.”
Passkeys are a new form of authentication that entirely eliminates the need for usernames and passwords, as well as the need to provide any additional authentication factor.
In other words, it’s a passwordless login mechanism that leverages public-key cryptography to authenticate users’ access to websites and apps, with the private key saved securely in the device and the public key stored in the server.
Each passkey is unique and bound to a username and a specific service, meaning a user will have at least as many passkeys as they have accounts. There can also be multiple passkeys per account, since each passkey functions only within the confines of a single platform.
A user can, therefore, have a separate passkey for the same website on each of Android, iOS, macOS, and Windows.
Thus, when a user signs into a website or app that supports passkeys, a random challenge is created and sent to the client, which, in turn, prompts the individual to verify using their biometric or a PIN in order to sign the challenge using the private key and send it back to the server.
Authentication is considered successful if the signed response can be validated using the associated public key.
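The registration and challenge–response flow described above can be sketched in a few lines of Python. This is a simplified illustration only, not the actual FIDO2/WebAuthn protocol: it uses the third-party `cryptography` package’s Ed25519 primitives as a stand-in for a platform authenticator, and all names (`device_private_key`, `server_public_key`, `challenge`) are hypothetical.

```python
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a key pair. The private key never
# leaves the device; only the public key is sent to the server, where it
# is stored bound to this user and service.
device_private_key = Ed25519PrivateKey.generate()
server_public_key = device_private_key.public_key()  # stored server-side

# Sign-in: the server issues a random challenge to the client ...
challenge = secrets.token_bytes(32)

# ... the device, after a local biometric or PIN check, signs the
# challenge with the private key and returns the signature ...
signature = device_private_key.sign(challenge)

# ... and the server validates the signature against the stored public
# key; verify() raises InvalidSignature if the check fails.
try:
    server_public_key.verify(signature, challenge)
    authenticated = True
except InvalidSignature:
    authenticated = False

print("authenticated" if authenticated else "rejected")
```

Because the challenge is random per sign-in and the signature only proves possession of the private key, nothing reusable by an attacker (like a password) ever crosses the network.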
The immediate benefit of passkeys is twofold: they not only obviate the hassle of remembering passwords, but are also phishing-resistant, thereby safeguarding accounts against potential takeover attacks.
The development comes weeks after Microsoft officially began supporting passkeys in Windows 11 for improved account security. Other widely-used platforms like eBay and Uber have enabled passkey support in recent months.