After a year of massive cuts, the tech job market is so unstable that the US government has come to be seen as an appealing, innovative employer.
Tech companies have laid off some 400,000 people worldwide in 2022 and 2023, according to Layoffs.fyi, a site that tracks tech industry job losses. With the market yet to right itself, and some people reexamining the role big tech firms play in society, public sector roles, complete with perks like pensions and a warm, fuzzy do-good feeling, are suddenly proving popular.
“This is a great nexus point where the need and capacity is out there,” says Keith Wilson, the talent engagement manager with US Digital Response, a nonprofit that helps governments with digital expertise. “We’re trying to help these state and local governments learn how to hire better for technical roles.”
Case in point: the US Department of Veterans Affairs has hired 1,068 people into tech jobs over the past year, meeting its hiring goal, says Nathan Tierney, the department's chief people officer. To do so, the agency adjusted pay to narrow the gap between government and private sector roles, resulting in an average salary increase of $18,000—and nearly all workers across the department got raises.
It also reworked its application and recruiting strategies; rather than wait for workers to come to the hiring website, it went to find them at LinkedIn Live events and conferences. The department also advertises remote roles, and it is setting up hubs for workers in cities where tech workers congregate, like Seattle, Austin, and Charlotte. “I want to hire highly skilled folks,” Tierney says. “We have an opportunity to capitalize on that.”
There’s a lot of work to do. Red tape and slow processes bog down government work. And keeping pace with the private sector, where hiring strategies and salaries move fast, has traditionally been hard for governments. Then, once hired, those employees may face similar roadblocks when it comes to innovating in their jobs. Still, local and federal US government branches are moving to bring in new talent.
In 2021, US president Joe Biden signed a $1 trillion infrastructure law. It included $1 billion in cybersecurity grants for state and local governments, along with additional money for federal agencies to spend on cybersecurity. This influx of cash comes as the tech sector slumps.
And interest in government jobs among tech workers remains strong. In late October, more than 3,000 people registered for a Tech to Gov career event, held by the Tech Talent Project, a nonprofit that helps the US government recruit for tech roles. One thousand more had signed up for a waiting list.
“It’s not just layoffs—what I have definitely seen is folks pausing in the tech sector,” says Jennifer Anastasoff, executive director at Tech Talent Project. “This has been a moment where folks have started pausing and started thinking about where they can make the most difference.”
A federal tech job portal had 107 openings as of mid-November, with salaries ranging from around $40,000 to nearly $240,000. The Office of Personnel Management, the human resources arm of the federal government, made a pitch to laid-off tech workers earlier this year, hoping to scoop up some 22,000 people into public sector tech roles. That office did not respond to emails seeking updates on the hiring process for tech jobs. But smaller government agencies around the country have made strides in luring high-profile private sector workers.
New York recently hired a former high-ranking employee from Blue Cross Blue Shield of Massachusetts to serve as the state’s first chief customer experience officer. Shelby Switzer took a job as the director of Baltimore’s new Digital Services Team earlier this year. Three new employees were hired underneath Switzer—all from the private sector. The group’s first project was to modernize permitting; instead of going to several offices in person to obtain permits for events and street closures, people can now apply online. It seems simple, but for the local government, that’s a huge deal.
One of those hires was a UX designer, says Switzer. “Having somebody who is the expert in thinking about the usability of services in technology is just totally new.” But working in government can mean one tech team is trying to innovate while stuck in a bigger, slow-moving pool. “There is a ton of organizational inertia,” Switzer says. “Government wasn’t really designed to be efficient.”
These kinds of small changes are hard to come by in government, but more cities and states are investing in tech infrastructure. In early November, Pennsylvania's Commonwealth Office of Digital Experience, or CODE PA, launched a system that lets residents, businesses, charities, and schools look up whether they are eligible for a refund after paying for a permit, license, or certification, and then request one.
Pennsylvania is investing big in tech and AI under Josh Shapiro, its new governor. It hired Amaya Capellán, who moved from Comcast to the Pennsylvania government this year, trading corporate life for the role of Pennsylvania’s chief information officer. Some initial priorities for Capellán include finding ways for governments to use generative AI and updating permitting and licensing.
Capellán says people may be realizing that tech companies are treating them as replaceable, pushing them to reconsider roles in tech. “It’s really inspiring to think about the kind of ways you can affect people’s lives for good.”
For the first time, a team of Princeton physicists has been able to link together individual molecules into special states that are quantum mechanically “entangled.” In these bizarre states, the molecules remain correlated with each other—and can interact simultaneously—even if they are miles apart, or indeed, even if they occupy opposite ends of the universe. This research was recently published in the journal Science.
“This is a breakthrough in the world of molecules because of the fundamental importance of quantum entanglement,” said Lawrence Cheuk, assistant professor of physics at Princeton University and the senior author of the paper. “But it is also a breakthrough for practical applications because entangled molecules can be the building blocks for many future applications.”
These include, for example, quantum computers that can solve certain problems much faster than conventional computers, quantum simulators that can model complex materials whose behaviors are difficult to model, and quantum sensors that can measure faster than their traditional counterparts.
“One of the motivations in doing quantum science is that in the practical world, it turns out that if you harness the laws of quantum mechanics, you can do a lot better in many areas,” said Connor Holland, a graduate student in the physics department and a co-author on the work.
The ability of quantum devices to outperform classical ones is known as “quantum advantage.” And at the core of quantum advantage are the principles of superposition and quantum entanglement. While a classical computer bit can assume the value of either 0 or 1, quantum bits, called qubits, can simultaneously be in a superposition of 0 and 1.
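As a textbook illustration (standard notation, not drawn from the paper itself), such a superposition can be written

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where a measurement returns 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$.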
The latter concept, entanglement, is a major cornerstone of quantum mechanics and occurs when two particles become inextricably linked with each other so that this link persists, even if one particle is light years away from the other particle. It is the phenomenon that Albert Einstein, who at first questioned its validity, described as “spooky action at a distance.”
Since then, physicists have demonstrated that entanglement is, in fact, an accurate description of the physical world and how reality is structured.
“Quantum entanglement is a fundamental concept,” said Cheuk, “but it is also the key ingredient that bestows quantum advantage.”
But building quantum advantage and achieving controllable quantum entanglement remains a challenge, not least because engineers and scientists are still unclear about which physical platform is best for creating qubits.
In the past decades, many different technologies—such as trapped ions, photons, and superconducting circuits, to name only a few—have been explored as candidates for quantum computers and devices. The optimal quantum system or qubit platform could very well depend on the specific application.
Until this experiment, molecules had long defied controllable quantum entanglement. But Cheuk and his colleagues found a way, through careful manipulation in the laboratory, to control individual molecules and coax them into these interlocking quantum states.
They also believed that molecules have certain advantages—over atoms, for example—that made them especially well-suited for certain applications in quantum information processing and quantum simulation of complex materials. Compared to atoms, for example, molecules have more quantum degrees of freedom and can interact in new ways.
“What this means, in practical terms, is that there are new ways of storing and processing quantum information,” said Yukai Lu, a graduate student in electrical and computer engineering and a co-author of the paper. “For example, a molecule can vibrate and rotate in multiple modes. So, you can use two of these modes to encode a qubit. If the molecular species is polar, two molecules can interact even when spatially separated.”
Nonetheless, molecules have proven notoriously difficult to control in the laboratory because of their complexity. The very degrees of freedom that make them attractive also make them hard to control or corral in laboratory settings.
Cheuk and his team addressed many of these challenges through a carefully thought-out experiment. They first picked a molecular species that is both polar and can be cooled with lasers. They then laser-cooled the molecules to ultracold temperatures, where quantum mechanics takes center stage.
Individual molecules were then picked up by a complex system of tightly focused laser beams, so-called “optical tweezers.” By engineering the positions of the tweezers, they were able to create large arrays of single molecules and individually position them into any desired one-dimensional configuration. For example, they created isolated pairs of molecules and defect-free strings of molecules.
Next, they encoded a qubit into a non-rotating and rotating state of the molecule. They were able to show that this molecular qubit remained coherent; that is, it remembered its superposition. In short, the researchers demonstrated the ability to create well-controlled and coherent qubits out of individually controlled molecules.
To entangle the molecules, they had to make the molecules interact. By using a series of microwave pulses, they were able to make individual molecules interact with one another in a coherent fashion.
By allowing the interaction to proceed for a precise amount of time, they were able to implement a two-qubit gate that entangled two molecules. This is significant because such an entangling two-qubit gate is a building block for both universal digital quantum computing and for simulation of complex materials.
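For a sense of what such a gate produces (a generic textbook example, not the specific state reported in the paper), an entangling two-qubit gate can take two independent qubits into a Bell state such as

$$|\Psi\rangle = \frac{1}{\sqrt{2}}\big(|01\rangle + |10\rangle\big),$$

in which neither qubit has a definite value on its own, yet measuring one instantly fixes the outcome for the other.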
The potential of this research for investigating different areas of quantum science is large, given the innovative features offered by this new platform of molecular tweezer arrays. In particular, the Princeton team is interested in exploring the physics of many interacting molecules, which can be used to simulate quantum many-body systems where interesting emergent behavior, such as novel forms of magnetism, can appear.
“Using molecules for quantum science is a new frontier, and our demonstration of on-demand entanglement is a key step in demonstrating that molecules can be used as a viable platform for quantum science,” said Cheuk.
In a separate article published in the same issue of Science, an independent research group led by John Doyle and Kang-Kuen Ni at Harvard University and Wolfgang Ketterle at the Massachusetts Institute of Technology achieved similar results.
“The fact that they got the same results verifies the reliability of our results,” Cheuk said. “They also show that molecular tweezer arrays are becoming an exciting new platform for quantum science.”
The Federal Electronic Health Record Modernization Office, along with ISO – the International Organization for Standardization – and the Centers for Disease Control and Prevention Division of Readiness and Response Science, is proud to publish a new standard that helps countries better prepare for national and international public health emergencies. The standard helps collect, manage, and predict public health emergency preparedness and response data. It’s a global game-changer for managing pandemics, outbreaks, toxic exposures, and hazardous events. Read more about it at https://lnkd.in/e7qBWWwP. #healthinteroperability #healthdatastandards #ISO #GlobalHealthEngagement
ANSI SERVES AS ISO TC 215 SECRETARIAT
In an effort to assure better preparedness for national and international public health emergencies, the International Organization for Standardization (ISO) Technical Committee (TC) 215, Health Informatics, has developed a newly released standard that provides business requirements, terminology, and vocabulary for public health emergency preparedness and response (PH EPR) information systems. The standard is applicable to emergencies that encompass emerging pathogens, including COVID-19, chemical and nuclear accidents, environmental disasters, criminal acts, and bioterrorism.
The international standard, ISO 5477:2023, is relevant to policy makers, regulators, project planners, and managers of PH EPR information systems, as well as PH EPR data analysts and informaticians, and may also be of interest to stakeholders including incident managers, PH educators, standards developers, and academia.
Information that drives a decision-making process is the most critical asset during all phases of PH emergencies. To that end, PH EPR information systems play a critical role in fulfilling major PH emergency response functions, including plans and procedures; physical infrastructure; information and communication technology (ICT) infrastructure; information systems and standards; and human resources.
The standard sets forth business rules for PH EPR information systems, and includes an informative framework for mapping existing semantic interoperability standards for emergency preparedness and response to PH EPR information systems. The document, which included input from 34 nations, was developed based on concepts and methodology described in:
The World Health Organization (WHO) Framework for a Public Health Operations Centre and Supporting WHO Handbooks A and C
ISO 30401, Knowledge management systems requirements
ISO 13054, Knowledge management of health information standards
ISO 22300, Security and resilience vocabulary
ISO 22320, Security and resilience emergency management guidelines for incident management
ISO 1087, Terminology work and terminology science
“This standard is designed to engage all global stakeholders involved in responding to public health emergencies. It fosters collaboration among participants committed to advancing the Global Health Security Agenda through enhanced information exchange,” said Dr. Nikolay Lipskiy, health scientist, Centers for Disease Control and Prevention (CDC), and project leader. “Our primary objective is to reduce barriers to information interoperability, thus improving critical data timeliness and usability. We anticipate that this pioneering standard will pave the way for additional documents focusing on more specific aspects in the near future.”
“It is crucial to acknowledge the invaluable contributions of the participants in our standard development team, with special recognition to the Federal Electronic Health Record Modernization (FEHRM) office,” added Dr. Lipskiy. “Their dedicated efforts have significantly elevated the quality and impact of this standard, demonstrating a collective commitment to advancing global public health infrastructure.”
About the U.S. TAG to ISO/TC 215, Health Informatics
The U.S. TAG to ISO TC 215, Health Informatics, represents national interests on health information technology (HIT) and health informatics standards at ISO. ANSI administers the U.S. TAG to ISO TC 215 to coordinate national standards activities for existing and emerging health sectors. The U.S. TAG is guided by the ANSI cardinal principles of consensus, due process, and openness.
The scope of ISO TC 215, and consequently of the U.S. TAG, is standardization in the field of health informatics, to facilitate capture, interchange, and use of health-related data, information, and knowledge to support and enable all aspects of the health system.
Sometimes the things we do with technology are like smoking on an airplane. ‘It seemed like a good idea at the time.’
Take, for example, the use of AI in predicting criminal behavior. Guilty as charged! I used to do that for the Dutch police, with great success, and everybody involved thought it was a good idea and a fair compromise between fundamental rights and societal benefits. We were right then, but we would be wrong now! The AI Act forbids it. Similarly, we once heralded data as the “new oil,” eagerly linking personal data across marketing databases to forecast human behavior. Only later did we realize the potential harm, prompting the introduction of GDPR to protect personal data.
The just passed(!) European AI Act represents a similar milestone. It reflects a growing understanding that we must more cautiously regulate certain technologies. I must say it scares me a little bit to think of future AI initiatives that later ‘seemed like a good idea at the time’. Autonomous weapons? Multi-purpose household robots? Computers doing all the programming? Fingers crossed for our world. May it all work out great!
About the AI Act: just like with every step of this work, it is unfortunately a puzzle to find out exactly what was agreed upon, and we’ll need to wait a few more months for the full text.
Since my focus is on engineering and cyber security: the Act requires high-risk systems to ensure cyber security and also to undergo adversarial testing, with some exceptions. This will require security standards to be ready for use. I happen to be working in CEN/CENELEC on this, supported by the OWASP AI Exchange at owaspai.org. Wish us luck, or better yet, join the AI Exchange or join CEN/CENELEC through your national Standardization organization.
European Union officials have reached a provisional deal on the world’s first comprehensive laws to regulate the use of artificial intelligence.
After 36 hours of talks, negotiators agreed rules around AI in systems like ChatGPT and facial recognition.
The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.
The US, UK and China are all rushing to publish their own guidelines.
The proposals include safeguards on the use of AI within the EU as well as limitations on its adoption by law enforcement agencies.
Consumers would have the right to launch complaints and fines could be imposed for violations.
EU Commissioner Thierry Breton described the plans as “historic”, saying it set “clear rules for the use of AI”.
He added it was “much more than a rulebook – it’s a launch pad for EU start-ups and researchers to lead the global AI race”.
European Commission President Ursula von der Leyen said the AI Act would help the development of technology that does not threaten people’s safety and rights.
In a social media post, she said it was a “unique legal framework for the development of AI you can trust”.
The European Parliament defines AI as software that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”.
ChatGPT and DALL-E are examples of what is called “generative” AI. These programs learn from vast quantities of data, such as online text and images, to generate new content which feels like it has been made by a human.
So-called “chatbots” – like ChatGPT – can have text conversations. Other AI programs like DALL-E can create images from simple text instructions.
Modern enterprises are powered by distributed software applications that need always-on, secured, responsive, and globally optimized access. A secured hybrid cloud strategy is essential to delivering this application experience for internal and external users. Our vision for hybrid cloud is clear: to help clients accelerate positive business outcomes by building, deploying and managing applications and services anytime, anywhere.
Traditional CloudOps and DevOps models that involve manual workflows may not deliver the required application experience. IBM strongly believes it’s time for a new approach, one driven by the applications themselves. The new paradigm is to simplify hybrid and multicloud application delivery with secured, performant application-centric networking, to help increase application velocity and improve collaboration between IT teams.
Just as cloud can provide a virtual platform to consume underlying resources like compute and storage, app-centric connectivity offers a new network overlay focused on application and service endpoints connectivity. It’s totally abstracted from the underlying networks that provide physical connectivity, and hence is highly simplified.
How can application-centric connectivity help IT teams? For CloudOps teams, this approach helps achieve visibility and optimization. For DevOps, it helps achieve business agility. Both teams can benefit from better team collaboration with a common user experience (UX), custom topology views and the ability to manage and view SLOs and resource status.
The new network paradigm in action: IBM Hybrid Cloud Mesh
IBM Hybrid Cloud Mesh, a multicloud networking solution announced earlier this year, is now available. This new SaaS product is designed to allow organizations to establish simple, scalable secured application-centric connectivity. The product is also designed to be predictable with respect to latency, bandwidth and cost. It is engineered for both CloudOps and DevOps teams to seamlessly manage and scale network applications, including cloud-native ones running on Red Hat OpenShift.
You’ll find a seamless on-ramp for applications and services across heterogeneous environments; for example, when combining Hybrid Cloud Mesh with the DNS traffic steering capabilities of IBM NS1 Connect, a SaaS solution for content, services and application delivery to millions of users.
Architecture of IBM Hybrid Cloud Mesh:
Two main architecture components are key to how the product is designed to work:
The Mesh Manager provides the centralized management and policy plane with observability.
Gateways implement the data plane of Hybrid Cloud Mesh and act as virtual routers and connectors. These are centrally managed through Mesh Manager and deployed both in the cloud and on customer premises. There are two types of gateways: 1) the Edge Gateway, deployed near workloads for forwarding, security enforcement, load balancing, and telemetry data collection; and 2) the Waypoint, deployed at Points of Presence (POPs) close to internet exchanges and colocation points for path, cost, and topology optimization.
Key features of IBM Hybrid Cloud Mesh:
Continuous infrastructure and application discovery: Mesh Manager continuously discovers and updates multicloud deployment infrastructure, making the discovery of deployed applications and services an automated experience. Continuous discovery allows Mesh Manager to maintain awareness of changes in the cloud assets.
Seamless connectivity: DevOps or CloudOps can express their connectivity intent through the UI or CLI, and Mesh connects the specified workloads, regardless of their location.
Security: Built on the principles of zero-trust, Mesh allows communication based on user intent only. All gateways are signed, and the threat surface is reduced because gateways can be configured only through Mesh Manager.
Observability: Mesh provides comprehensive monitoring through the Mesh Manager day0/day1 UI, offering details on deployment environments, gateways, services and connectivity metrics.
Traffic engineering capabilities: Leveraging waypoints, Hybrid Cloud Mesh is designed to optimize paths for cost, latency and bandwidth, enhancing application performance and security.
Integrated workflows: DevOps, NetOps, SecOps and FinOps workflows come together, providing end-to-end application connectivity through a single pane of glass.
Take the next step with Hybrid Cloud Mesh
We are excited to showcase a tech preview of Hybrid Cloud Mesh supporting the use of Red Hat Service Interconnect gateways, simplifying application connectivity and security across platforms, clusters and clouds. Red Hat Service Interconnect, announced 23 May 2023 at Red Hat Summit, creates connections between services, applications and workloads across hybrid environments.
We’re just getting started on our journey building comprehensive hybrid multicloud automation solutions for the enterprise. Hybrid Cloud Mesh is not just a network solution; it’s engineered to be a transformative force that empowers businesses to derive maximum value from modern application architecture, enabling hybrid cloud adoption and revolutionizing how multicloud environments are utilized. We hope you join us on the journey.
I’m COO at Unstoppable Domains, a Web3 digital identity platform
I was recently engaged in a conversation about my work on Watson back in 2015 when a comment I received confused me: “AI just started! Was ChatGPT created in 2015?” I realized then that many people didn’t realize the depth of AI history.
First of all, what is AI? Artificial Intelligence, a specialty within computer science, focuses on creating systems that replicate human intelligence and problem-solving abilities. These systems learn from data, process information, and refine their performance over time, distinguishing them from conventional computer programs that require human intervention for improvement.
The landscape of artificial intelligence (AI) is a testament to the relentless pursuit of innovation by the technologists who have shaped its trajectory. In this journey, key players have emerged, each contributing to the evolution of AI in unique and transformative ways.
The Way Back When
The history of AI can be traced back to the 1950s, when Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing Test as a measure of computer intelligence. Around the same time, John McCarthy, a “founding father” of AI, created LISP, the first programming language for AI research, which is still used today.
A period of low interest and funding in AI followed, due to setbacks in the Lisp machine market and in expert systems. Despite decreased funding, the 1990s and early 2000s brought AI into everyday life with innovations like speech recognition software and the Roomba.
The resurgence of interest brought new funding for research, which allowed even more progress to be made.
IBM's Impact (1997-2011)
IBM started its AI adventure when Deep Blue beat the world chess champion, Garry Kasparov, in a highly publicized match in 1997, becoming the first program to defeat a reigning world chess champion. In 2011, IBM introduced Watson, a question-answering (QA) system, which went on to win Jeopardy! against two former champions in a televised game. Recognized globally for this groundbreaking performance, Watson showcased the transformative potential of cognitive computing.
Innovation in this period revolutionized industries such as healthcare, finance, customer service, and research. In the healthcare sector, Watson’s cognitive capabilities proved instrumental for medical professionals, offering support in diagnosing and treating complex diseases. By analyzing medical records, research papers, and patient data, it facilitated precision medicine practices, empowering healthcare practitioners with valuable insights into potential treatment options. Watson’s transformative impact extended to customer service, where it reshaped interactions through the provision of intelligent virtual assistants.
Watson’s influence also reached the realm of research and development, empowering researchers to analyze vast amounts of scientific literature. This catalyzed the discovery of new insights and potential breakthroughs by uncovering patterns, correlations, and solutions hidden within extensive datasets.
“Watson was one of the first usable AI engines for the Enterprise,” said Arvind Krishna, CEO of IBM. “IBM continues to drive innovation in AI and Generative AI to help our customers move forward.”
Watson’s legacy is profound, showcasing the formidable power of AI in understanding human language, processing vast datasets, and delivering valuable insights across multiple industries. Its pioneering work in natural language processing and cognitive computing set the stage for subsequent innovations like ChatGPT, marking a transformative era in the evolution of artificial intelligence. IBM continues to innovate in AI today.
The Assistants and Beyond: Amazon AMZN and Apple AAPL (2011-2014)
Amazon’s Alexa and Apple’s Siri both marked significant leaps in human-computer interaction. These voice-controlled virtual assistants transformed how users access information, control their environments, and shop online, showcasing AI’s potential to improve daily life.
Beyond voice assistance, Amazon leverages AI for personalized recommendations on its e-commerce platform, enhancing the customer shopping experience. Additionally, the company’s robust cloud computing service, Amazon Web Services (AWS), provides scalable and efficient infrastructure for AI development, enabling businesses and developers to leverage cutting-edge machine learning capabilities. Amazon’s commitment to advancing AI technologies aligns with its vision of making intelligent and intuitive computing accessible to users in various aspects of their daily lives, from the living room to the online marketplace.
Google GOOG: Deep Learning Breakthroughs (2012-2019)
Google, synonymous with innovation, has been a driving force in AI research. Its DeepMind subsidiary, acquired in 2014, marked a turning point with groundbreaking achievements in deep learning and reinforcement learning. In another landmark, two Google researchers (Jeff Dean and Andrew Ng) trained a neural network to recognize cats by showing it unlabeled images, with no background information.
Google’s commitment to democratizing AI is evident through TensorFlow, an open-source machine learning library empowering developers worldwide to create and deploy AI applications efficiently. Additionally, Google’s advancements in natural language processing, image recognition, and predictive algorithms have shaped the landscape of AI applications across diverse domains, with Google Search, Photos, and Assistant demonstrating a commitment to enhancing user experiences and making AI an integral part of daily life.
OpenAI: Expanding the Horizons of Natural Language Processing (2020-present)
Generative AI (GEN AI) refers to a category of artificial intelligence systems designed to generate content, often in the form of text, images, or other media, that is contextually relevant and resembles content created by humans. Unlike traditional AI models that may follow pre-programmed rules or make predictions based on existing data, generative AI can produce original and diverse outputs.
One prominent example of generative AI is OpenAI’s GPT (Generative Pre-trained Transformer) series, including models like GPT-3. In late 2022, ChatGPT made headlines by attracting 1 million users within a week of its launch. By early November, the platform had amassed over 200 million monthly users, showcasing the significant impact of OpenAI’s innovations on the AI landscape.
This success story underscores OpenAI’s commitment to advancing NLP and its ability to deliver platforms that resonate with a vast user base. The influence extends beyond user metrics; ChatGPT has played a pivotal role in the development of subsequent innovations like GPT-4, reinforcing OpenAI’s position as a trailblazer in conversational AI. Microsoft’s MSFT strategic investment further validates OpenAI’s crucial role in the evolution of AI and NLP, signifying industry-wide recognition of its contributions.
On November 6th, OpenAI introduced GPTs, custom iterations of ChatGPT, which amalgamate instructions, extended knowledge, and actionable insights. The launch of the Assistants API facilitates the seamless integration of assistant experiences with individual applications. These advancements are viewed as foundational steps toward the realization of AI agents, with OpenAI committed to enhancing their capabilities over time. The introduction of the new GPT-4 Turbo model brings forth improvements in function calling, knowledge incorporation, pricing adjustments, support for new modalities, and more. Additionally, OpenAI now provides a copyright shield for enterprise clients, exemplifying their ongoing commitment to innovation and client support in the evolving landscape of generative AI.
Built for safety: Anthropic (2021-present)
Anthropic is an AI safety startup founded in 2021 that leverages constitutional AI, an approach to training models to be helpful, harmless, and honest. The research team developed Claude, a large language model trained using this constitutional basis. Anthropic’s research-driven approach and focus on AI safety place it at the forefront of developing responsible and beneficial AI systems, as does its use of techniques like data filtering and controlled training environments to avoid biases and errors. The model is also focused on common sense: it understands intuitive physics, psychology, and social norms, allowing it to give reasonable answers. Both Google and Amazon have invested in Anthropic.
Intel INTC and NVIDIA NVDA: Powering the AI Revolution Through Hardware
Companies like Intel and NVIDIA have played a crucial and often underestimated role in the AI landscape by providing the hardware infrastructure that underpins remarkable advancements. Their development of powerful processors and graphics processing units (GPUs) optimized for machine learning tasks has been pivotal, facilitating not only the training but also the deployment of complex AI models and accelerating the pace of innovation in the field.
What’s Next? Responsible AI
Responsible AI refers to the design, development, and deployment of AI systems in an ethical and socially-aware manner. As AI becomes more powerful and ubiquitous, practitioners must consider the impacts these systems have on individuals and society. Responsible AI encompasses principles such as transparency, explainability, robustness, fairness, accountability, privacy, and human oversight.
Developing responsible AI systems requires proactive consideration of ethical issues during all stages of the AI lifecycle. Organizations should conduct impact assessments to identify potential risks and harms, particularly for marginalized groups. Teams should represent diverse perspectives when designing, building, and testing systems to reduce harmful bias, which is why companies like Credo AI have jumped in to focus on a responsible framework.
Credo AI is an AI governance platform that streamlines responsible AI adoption by automating AI oversight, risk mitigation, and regulatory compliance. Their founder and CEO, Navrina Singh commented, “The next frontier for AI is responsible AI. We must remain steadfast in mitigating the risks associated with artificial intelligence.”
What’s Next? The Physical World
Spatial AI and robotic vision represent an evolution in how artificial intelligence systems perceive and interact with the physical world. Integrating spatial data like maps and floor plans with computer vision allows robots and drones to navigate and operate safely. Robotic vision systems can now identify objects, read text, and interpret scenes in 3D space, giving robots unprecedented awareness of their surroundings and the mobility to take on more complex real-world tasks.
New spatial capabilities are unlocking tremendous economic potential beyond the $17B already raised for AI vision startups. Warehouse automation, last-mile delivery, autonomous vehicles, and advanced manufacturing are all powered by spatial AI and computer vision, and over time they will offer more dynamic and versatile interactions with physical environments. This could enable revolutionary applications, from robot-assisted surgery to fully-autonomous transportation.
What’s Next? Customer-Centric AI Apps
There is a shift happening right now in AI, from technology-centric solutions that solve infrastructure problems to customer-centric applications that solve real-world human problems comprehensively. This evolution signals a move beyond the initial excitement of AI’s capabilities to a phase where technology meets the diverse and complex needs of end-users.
While the concept of a data flywheel remains relevant (as a reminder: more usage → more data → better model → more usage), the sustainability of data moats is on shaky ground. The real moats, it seems, are the customers themselves. Engagement depth, productivity benefits, and monetization strategies are emerging as more durable sources of competitive advantage.
Hassan Sawaf, CEO and Founder of AIXplain, highlights the power of a customer-focused engagement approach. “We believe in AI agents that can help swiftly craft personalized solutions by leveraging cutting-edge technology from leading AI providers in real time. That’s what we’ve created with Bel Esprit and 40,000 state-of-the-art models with the capability to onboard hundreds of thousands more from platforms with proprietary sources in minutes. One click is all it takes for deployment, making Bel Esprit a game-changer in the AI landscape.”
Reflection
As we reflect on the success of what is now recognized as GEN AI, it is imperative to acknowledge the collective contributions of the tech titans that I’ve mentioned. Their advancements in deep learning, natural language processing, accessibility, and hardware infrastructure have not only shaped the trajectory of AI, but have also ushered in an era where intelligent technologies play an integral role in shaping our digital future.
Celebrating the pioneers in AI emphasizes their innovative spirit and unwavering commitment to advancing the boundaries of technology, paving the way for the sophisticated AI models that define our current era. A round of applause for the diligent technologists working at these companies.
I’m COO at Unstoppable Domains, and an alumna of AWS and IBM. I’m also chairwoman of the board of the nonprofit Girls in Tech, a former member of the Diversity Committee at the World Economic Forum, and currently a founding member of the Blockchain Friends Forever social movement for women in Web3. I hold and trade modest amounts of ETH and BTC. These days I’m passionate about enterprise use cases for decentralized technologies. My latest book, “The Tiger and the Rabbit,” ships on 8/30! It’s a business fable about AI, Web3, and the Metaverse!
The Pentagon needs “to find a way to hold people accountable” for what artificial intelligence technologies do in future conflicts, according to Air Force Secretary Frank Kendall.
Humans will ultimately be held responsible for the use or misuse of artificial intelligence technologies during military conflicts, a top Department of Defense official said during a panel discussion at the Reagan National Defense Forum on Saturday.
Air Force Secretary Frank Kendall dismissed the notion “of the rogue robot that goes out there and runs around and shoots everything in sight indiscriminately,” highlighting the fact that AI technologies — particularly those deployed on the battlefields of the future — will be governed by some level of human oversight.
“I care a lot about civil society and the rule of law, including laws of armed conflict,” he said. “Our policies are written around compliance with those laws. You don’t enforce laws against machines; you enforce them against people. And I think our challenge is not to somehow limit what we can do with AI, but it’s to find a way to hold people accountable for what the AI does.”
Even as the Pentagon continues to experiment with AI, the department has worked to establish safeguards around its use of the technologies. DOD updated its decades-old policy on autonomous weapons in February to clarify, in part, that weapons with AI-enabled capabilities need to follow the department’s AI guidelines.
The goal for now, Kendall said, is to build confidence and trust in the technology and then “get it into field capabilities as quickly as we can.”
“The critical parameter on the battlefield is time,” he added. “And AI will be able to do much more complicated things much more accurately and much faster than human beings can.”
Kendall pointed to two specific mistakes that AI could make “in a lethal area”: failing to engage a target that it should have engaged, or engaging civilian targets, U.S. military assets, or allies. These possibilities, he said, necessitate more defined rules for holding operators responsible when such mistakes occur.
“We are still going to have to find ways to manage this technology, manage its application and hold human beings accountable for when it doesn’t comply with the rules that we already have,” he added. “I think that’s the approach we need to take.”
For the time being, however, the Pentagon’s uses of AI are largely focused on processing large amounts of data for more administrative-oriented tasks.
“There are enormous possibilities here, but it is not anywhere near general human intelligence equivalents,” Kendall said, citing pattern recognition and “deep data analytics to associate things from an intelligence perspective” as AI’s most effective applications.
During a discussion last month, Schuyler Moore — the chief technology officer for U.S. Central Command — cited AI’s uneven performance and said that during military conflicts, officials “will more frequently than not put it to the side or use it in very, very select contexts where we feel very certain of the risks associated.”
But concerns still remain about how these tools will ultimately be used to enhance future warfighting capabilities, and the specific policies that are needed to enforce safeguards.
Rep. Mike Gallagher, R-Wis. — who chairs the House Select Committee on the Chinese Communist Party and formerly co-chaired the Cyberspace Solarium Commission — said “we need to have a plan for whether and how we are going to quickly adopt [AI] across multiple battlefield domains and warfighting capabilities.”
“I’m not sure we’ve thought through that,” Gallagher added.
Legislation from Sens. Gary Peters, D-Mich., and Joni Ernst, R-Iowa, would “streamline procedures” for both solicitation and awards by slimming down the procurement process.
A new bipartisan proposal seeks to simplify the federal contracting process — and potentially allow for more small businesses to work with the government — by reducing burdensome requirements and creating “a more nimble and meaningful bidding process and evaluation of proposals.”
The Conforming Procedures for Federal Task and Delivery Order Contracts Act was introduced by Sens. Gary Peters, D-Mich., and Joni Ernst, R-Iowa, on Jan. 19.
The bill seeks “to streamline procedures for solicitation and the awarding of task and delivery order contracts for agencies” by shrinking “the procurement process for contractors bidding on work as well as for the government, ensuring necessary due diligence is done while allowing awards to be made faster and to a wider array of contractors, including small businesses.”
This includes reducing “duplication of documentation requirements for agencies” and applying some of the contracting measures that the Department of Defense “currently has in place to all federal agencies.”
Ernst — the ranking member of the Senate Small Business and Entrepreneurship Committee — said in a statement that “too much bureaucratic red tape stands in the way” when it comes to smaller companies effectively competing for federal contracts.
“By making the award process faster and wider, Iowa’s small businesses and entrepreneurs can better compete and succeed,” she added, referencing the benefits the bill would have for her Hawkeye State constituents.
In a statement, Peters also said the legislation “streamlines the contracting process for federal government agencies, and as a result will boost small businesses trying to stay competitive and will increase efficiency for all government agencies, benefitting people across the nation.”
This isn’t the first time that Peters and Ernst have teamed up on legislation to improve the government’s procurement process, which is receiving renewed attention as lawmakers discuss the role that emerging technologies can play in bolstering the capabilities of federal services.
The senators previously authored legislation, known as the PRICE Act, to “promote innovative acquisition techniques and procurement strategies” to improve the contracting process for small businesses. Their bill was signed into law in February 2022.
Peters and Ernst also introduced legislation in July 2022 that would require the Office of Management and Budget and the General Services Administration “to streamline the ability of the federal government to purchase commercial technology and provide specific training for information and communications technology acquisition.”
Following a Jan. 10 Senate Homeland Security and Governmental Affairs Committee hearing on how artificial intelligence can be used to improve government services, Peters — who chairs the panel — also told Nextgov/FCW “how the federal government procures AI… is going to have a big impact on AI throughout the economy.”
“And I think that’s a very effective way for us to think about AI regulation, through the procurement process,” he said.
It’s the first day of IBM Quantum Summit 2023, and we are thrilled to share a bevy of announcements and updates with you.
At today’s event, we’re presenting new capabilities we’re developing in order to support the next wave of quantum users: quantum computational scientists. In addition to unveiling an operational IBM Quantum System Two, we’re sharing our newest, highest-performing quantum processor yet, Heron—and demonstrating how we’re scaling quantum processors with the 1,121-qubit IBM Condor processor.
We’re also introducing Qiskit 1.0, Qiskit’s first stable release, alongside Qiskit Patterns—a framework for quantum computational scientists to do meaningful scalable work with quantum algorithms. With Qiskit Patterns, users can seamlessly create quantum algorithms and applications from a collection of foundational building blocks and execute those Patterns using heterogeneous computing infrastructure such as Quantum Serverless, now available as a beta release. We’re also deploying new execution modes so computational scientists can maximize performance from our hardware while they run utility-scale workloads.
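For a flavor of what a foundational building block looks like in practice, here is a minimal sketch using Qiskit's public API (an illustrative two-qubit entangling circuit, not the Qiskit Patterns framework itself):

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Build a two-qubit circuit that prepares an entangled Bell state.
qc = QuantumCircuit(2)
qc.h(0)      # Hadamard: put qubit 0 in an equal superposition of |0> and |1>
qc.cx(0, 1)  # CNOT: entangle qubit 1 with qubit 0

# Inspect the resulting state vector (no hardware needed for this sketch).
state = Statevector.from_instruction(qc)
print(state)  # amplitudes of the Bell state (|00> + |11>)/sqrt(2)

Qiskit Patterns is then meant to compose steps like this into reusable building blocks that can be executed on heterogeneous infrastructure such as Quantum Serverless, as described above.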
And finally, we’re sharing our new roadmap, laying out a vision for quantum computing all the way to 2033. This is the most exciting time in quantum computing to date, and we’re so proud to share it with you. Head over to the IBM blog for more details.