healthcarereimagined

Envisioning healthcare for the 21st century


App-centric connectivity: A new paradigm for a multicloud world – IBM

Posted by timmreardon on 12/09/2023
Posted in: Uncategorized.

December 7, 2023 | By Murali Gandluru

Modern enterprises are powered by distributed software applications that need always-on, secured, responsive and globally optimized access. A secured hybrid cloud strategy is essential to delivering this application experience for internal and external users. Our vision for hybrid cloud is clear: to help clients accelerate positive business outcomes by building, deploying and managing applications and services anytime, anywhere.

Traditional CloudOps and DevOps models that involve manual workflows may not deliver the required application experience. IBM strongly believes it’s time for a new approach, one driven by the applications themselves. The new paradigm is to simplify hybrid and multicloud application delivery with secured, performant application-centric networking, to help increase application velocity and improve collaboration between IT teams.

Just as cloud provides a virtual platform for consuming underlying resources like compute and storage, app-centric connectivity offers a new network overlay focused on connectivity between application and service endpoints. Because it is fully abstracted from the underlying networks that provide physical connectivity, it is dramatically simpler to operate.

How can application-centric connectivity help IT teams? For CloudOps teams, this approach helps achieve visibility and optimization. For DevOps, it helps achieve business agility. Both teams can benefit from better team collaboration with a common user experience (UX), custom topology views and the ability to manage and view SLOs and resource status.

The new network paradigm in action: IBM Hybrid Cloud Mesh

IBM Hybrid Cloud Mesh, a multicloud networking solution announced earlier this year, is now available. This new SaaS product is designed to allow organizations to establish simple, scalable secured application-centric connectivity. The product is also designed to be predictable with respect to latency, bandwidth and cost. It is engineered for both CloudOps and DevOps teams to seamlessly manage and scale network applications, including cloud-native ones running on Red Hat OpenShift.

You’ll find a seamless on-ramp for applications and services across heterogeneous environments; for example, when combining Hybrid Cloud Mesh with the DNS traffic-steering capabilities of IBM NS1 Connect, a SaaS solution for content, services and application delivery to millions of users.

Architecture of IBM Hybrid Cloud Mesh:

Two main architecture components are key to how the product is designed to work:

  • The Mesh Manager provides the centralized management and policy plane with observability. 
  • Gateways implement the data plane of Hybrid Cloud Mesh and act as virtual routers and connectors. These are centrally managed through Mesh Manager and deployed both in the cloud and on customer premises. There are two types of gateways: 1) the Edge Gateway, deployed near workloads for forwarding, security enforcement, load balancing and telemetry data collection; and 2) the Waypoint, deployed at Points of Presence (POPs) close to internet exchanges and colocation points for path, cost and topology optimization.

Key features of IBM Hybrid Cloud Mesh:

  • Continuous infrastructure and application discovery: Mesh Manager continuously discovers and updates multicloud deployment infrastructure, making the discovery of deployed applications and services an automated experience. Continuous discovery allows Mesh Manager to maintain awareness of changes in the cloud assets.
  • Seamless connectivity: DevOps or CloudOps can express their connectivity intent through the UI or CLI, and Mesh connects the specified workloads regardless of their location (a hypothetical sketch of such an intent follows this list).
  • Security: Built on the principles of zero-trust, Mesh allows communication based on user intent only. All gateways are signed, and threat surface is addressed since they can be configured only through Mesh Manager.
  • Observability: Mesh provides comprehensive monitoring through the Mesh Manager day0/day1 UI, offering details on deployment environments, gateways, services and connectivity metrics.
  • Traffic Engineering Capabilities: Leveraging waypoints, Hybrid Cloud Mesh is designed to optimize paths for cost, latency and bandwidth, to enhance application performance and security.
  • Integrated Workflows: DevOps, NetOps, SecOps and FinOps workflows unite in a symphony of collaboration, providing end-to-end application connectivity through a single, harmonious pane of glass. 
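
The product’s actual CLI and API are not shown in this post, so as a companion to the “Seamless connectivity” feature above, here is a minimal, hypothetical sketch of what declaring a connectivity intent between two discovered services might look like. Every class and field name below is invented for illustration; it is not the Hybrid Cloud Mesh interface.

```python
# Hypothetical sketch only: these classes and field names are invented
# for illustration and are not the Hybrid Cloud Mesh API.
from dataclasses import dataclass

@dataclass
class ServiceEndpoint:
    name: str        # a discovered service name
    location: str    # e.g., "aws:us-east-1" or "on-prem:dc2"

@dataclass
class ConnectivityIntent:
    source: ServiceEndpoint
    destination: ServiceEndpoint
    port: int
    latency_slo_ms: int  # a latency target the mesh should optimize for

# The operator declares *what* should talk to what; a mesh manager would
# program edge and waypoint gateways to realize the connection.
intent = ConnectivityIntent(
    source=ServiceEndpoint("checkout-frontend", "aws:us-east-1"),
    destination=ServiceEndpoint("inventory-db", "on-prem:dc2"),
    port=5432,
    latency_slo_ms=50,
)
print(f"Connect {intent.source.name} -> {intent.destination.name} "
      f"on port {intent.port} (SLO {intent.latency_slo_ms} ms)")
```

The point of the abstraction is that teams declare endpoints and service-level targets, while the control plane decides which gateways and paths satisfy them.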

Take the next step with Hybrid Cloud Mesh

We are excited to showcase a tech preview of Hybrid Cloud Mesh supporting the use of Red Hat Service Interconnect gateways, simplifying application connectivity and security across platforms, clusters and clouds. Red Hat Service Interconnect, announced 23 May 2023 at Red Hat Summit, creates connections between services, applications and workloads across hybrid environments.

We’re just getting started on our journey building comprehensive hybrid multicloud automation solutions for the enterprise. Hybrid Cloud Mesh is not just a network solution; it’s engineered to be a transformative force that empowers businesses to derive maximum value from modern application architecture, enabling hybrid cloud adoption and revolutionizing how multicloud environments are utilized. We hope you join us on the journey.

Article link: https://www.ibm.com/blog/announcement/app-centric-connectivity-a-new-paradigm-for-a-multicloud-world/?

The Evolution Of AI: From IBM And AWS To OpenAI and Anthropic – Forbes

Posted by timmreardon on 12/07/2023
Posted in: Uncategorized.

Sandy Carter, Contributor

I’m COO at Unstoppable Domains, a Web3 digital identity platform

I was recently engaged in a conversation about my work on Watson back in 2015 when a comment I received confused me: “AI just started! Was ChatGPT created in 2015?” I realized then that many people didn’t realize the depth of AI’s history.

First of all, what is AI? Artificial Intelligence, a specialty within computer science, focuses on creating systems that replicate human intelligence and problem-solving abilities. These systems learn from data, process information, and refine their performance over time, distinguishing them from conventional computer programs that require human intervention for improvement.

The landscape of artificial intelligence (AI) is a testament to the relentless pursuit of innovation by technologists that have shaped its trajectory. In this journey, key players have emerged, each contributing to the evolution of AI in unique and transformative ways.

The Way Back When

The history of AI can be traced back to the 1950s, when Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing Test as a measure of computer intelligence. At the same time, John McCarthy, a “founding father” of AI, created LISP, the first programming language for AI research, which is still used today.

The 1970s saw significant milestones, including the autonomous Stanford Cart navigating a room full of chairs and the founding of the American Association for Artificial Intelligence, now known as the Association for the Advancement of Artificial Intelligence (AAAI).

The AI Winter: 1987-1993

A period of low interest and funding in AI followed, driven by setbacks in the Lisp machine market and expert systems. Despite decreased funding, the 1990s brought AI into everyday life with innovations like speech recognition software, followed in the early 2000s by the Roomba.

The resurgence of interest was followed by new funding for research, which enabled even more progress.

IBM’s Impact (1997-2011)

IBM started its AI adventure when Deep Blue beat the world chess champion, Garry Kasparov, in a highly publicized match in 1997, becoming the first program to beat a human chess champion. IBM then created Watson, a question-answering (QA) system, which in 2011 went on to win Jeopardy! against two former champions in a televised game. Recognized globally for this groundbreaking performance on Jeopardy!, Watson showcased the transformative potential of cognitive computing.

Innovation in this period revolutionized industries such as healthcare, finance, customer service, and research. In the healthcare sector, Watson’s cognitive capabilities proved instrumental for medical professionals, offering support in diagnosing and treating complex diseases. By analyzing medical records, research papers, and patient data, it facilitated precision medicine practices, empowering healthcare practitioners with valuable insights into potential treatment options. Watson’s transformative impact extended to customer service, where it reshaped interactions through the provision of intelligent virtual assistants.

Watson’s influence also reached the realm of research and development, empowering researchers to analyze vast amounts of scientific literature. This catalyzed the discovery of new insights and potential breakthroughs by uncovering patterns, correlations, and solutions hidden within extensive datasets.

“Watson was one of the first usable AI engines for the Enterprise,” said Arvind Krishna, CEO of IBM. “IBM continues to drive innovation in AI and Generative AI to help our customers move forward.”

Watson’s legacy is profound, showcasing the formidable power of AI in understanding human language, processing vast datasets, and delivering valuable insights across multiple industries. Its pioneering work in natural language processing and cognitive computing set the stage for subsequent innovations like ChatGPT, marking a transformative era in the evolution of artificial intelligence. IBM continues to innovate in AI today.

The Assistants and Beyond: Amazon and Apple (2011-2014)

Amazon’s AI foray with Alexa and Apple’s Siri marked significant leaps in human-computer interaction. These voice-controlled virtual assistants transformed how users access information, control their environments, and shop online, showcasing AI’s potential to improve daily life.

Beyond voice assistance, Amazon leverages AI for personalized recommendations on its e-commerce platform, enhancing the customer shopping experience. Additionally, the company’s robust cloud computing service, Amazon Web Services (AWS), provides scalable and efficient infrastructure for AI development, enabling businesses and developers to leverage cutting-edge machine learning capabilities. Amazon’s commitment to advancing AI technologies aligns with its vision of making intelligent and intuitive computing accessible to users in various aspects of their daily lives, from the living room to the online marketplace.

Google: Deep Learning Breakthroughs (2012-2019)

Google, synonymous with innovation, has been a driving force in AI research. DeepMind, which Google acquired in 2014, marked a turning point with groundbreaking achievements in deep learning and reinforcement learning. For example, two Google researchers (Jeff Dean and Andrew Ng) trained a neural network to recognize cats by showing it unlabeled images with no background information.

Google’s commitment to democratizing AI is evident through TensorFlow, an open-source machine learning library empowering developers worldwide to create and deploy AI applications efficiently. Additionally, Google’s advancements in natural language processing, image recognition, and predictive algorithms have shaped the landscape of AI applications across diverse domains with Google Search, Photos, and Assistant demonstrating a commitment to enhancing user experiences and making AI an integral part of daily life.

OpenAI: Expanding the Horizons of Natural Language Processing (2020-present)

Generative AI (GEN AI) refers to a category of artificial intelligence systems designed to generate content, often in the form of text, images, or other media, that is contextually relevant and resembles content created by humans. Unlike traditional AI models that may follow pre-programmed rules or make predictions based on existing data, generative AI can produce original and diverse outputs.

One prominent example of generative AI is OpenAI’s GPT (Generative Pre-trained Transformer) series, including models like GPT-3. In late 2022, ChatGPT made headlines by attracting 1 million users within a week of its launch. By early November 2023, the platform had amassed over 100 million weekly users, showcasing the significant impact of OpenAI’s innovations on the AI landscape.

This success story underscores OpenAI’s commitment to advancing NLP and its ability to deliver platforms that resonate with a vast user base. The influence extends beyond user metrics; ChatGPT has played a pivotal role in the development of subsequent innovations like GPT-4, reinforcing OpenAI’s position as a trailblazer in conversational AI. Microsoft’s strategic investment further validates OpenAI’s crucial role in the evolution of AI and NLP, signifying industry-wide recognition of its contributions.

On November 6th, OpenAI introduced GPTs, custom iterations of ChatGPT that amalgamate instructions, extended knowledge, and actionable insights. The launch of the Assistants API facilitates the seamless integration of assistant experiences into individual applications. These advancements are viewed as foundational steps toward the realization of AI agents, with OpenAI committed to enhancing their capabilities over time. The introduction of the new GPT-4 Turbo model brings improvements in function calling, knowledge incorporation, pricing adjustments, support for new modalities, and more. Additionally, OpenAI now provides a copyright shield for enterprise clients, exemplifying its ongoing commitment to innovation and client support in the evolving landscape of generative AI.
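
For readers curious what the Assistants API looks like in practice, below is a minimal sketch based on the OpenAI Python library as announced at DevDay in November 2023. The API shipped in beta, so names such as the `beta` namespace and the model string may have changed since; treat this as an assumption-laden illustration rather than current reference code.

```python
# Minimal sketch of the Assistants API as announced at OpenAI DevDay
# (November 2023). The API was in beta, so the `beta` namespace and the
# model string below may have changed since. Requires the openai v1
# Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# An assistant bundles instructions and a model, much like a custom GPT.
assistant = client.beta.assistants.create(
    name="Research Helper",
    instructions="Summarize technical articles in plain language.",
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
)

# Conversations live in threads; a run executes the assistant on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the history of AI in three sentences.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.status)  # poll until complete, then read the thread's messages
```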

Built for safety: Anthropic (2021-present)

Anthropic is an AI safety startup founded in 2021 that leverages constitutional AI, an approach to training models to be helpful, harmless, and honest. The research team developed Claude, a large language model trained on that constitutional basis. Anthropic’s research-driven approach and focus on AI safety place it at the forefront of developing responsible and beneficial AI systems, as does its use of techniques like data filtering and controlled training environments to avoid biases and errors. The model is also focused on common sense: it understands intuitive physics, psychology, and social norms, allowing it to give reasonable answers. Both Google and Amazon have invested in Anthropic.

Intel and NVIDIA: Powering the AI Revolution Through Hardware

Companies like Intel and NVIDIA have played a crucial and often underestimated role in the AI landscape by providing the hardware infrastructure that underpins remarkable advancements. Their development of powerful processors and graphics processing units (GPUs) optimized for machine learning tasks has been pivotal, facilitating not only the training but also the deployment of complex AI models, accelerating the pace of innovation in the field.

What’s Next? Responsible AI

Responsible AI refers to the design, development, and deployment of AI systems in an ethical and socially-aware manner. As AI becomes more powerful and ubiquitous, practitioners must consider the impacts these systems have on individuals and society. Responsible AI encompasses principles such as transparency, explainability, robustness, fairness, accountability, privacy, and human oversight.

Developing responsible AI systems requires proactive consideration of ethical issues during all stages of the AI lifecycle. Organizations should conduct impact assessments to identify potential risks and harms, particularly for marginalized groups. Teams should represent diverse perspectives when designing, building, and testing systems to reduce harmful bias, which is why companies like Credo AI have stepped in to focus on a responsible-AI framework.

Credo AI is an AI governance platform that streamlines responsible AI adoption by automating AI oversight, risk mitigation, and regulatory compliance. Their founder and CEO, Navrina Singh, commented, “The next frontier for AI is responsible AI. We must remain steadfast in mitigating the risks associated with artificial intelligence.”

What’s Next? The Physical World

Spatial AI and robotic vision represent an evolution in how artificial intelligence systems perceive and interact with the physical world. By integrating spatial data like maps and floor plans with computer vision, they allow robots and drones to navigate and operate safely. Robotic vision systems can now identify objects, read text, and interpret scenes in 3D space, giving robots unprecedented awareness of their surroundings and the mobility to take on more complex real-world tasks.

New spatial capabilities are unlocking tremendous economic potential beyond the $17B already raised for AI vision startups. Warehouse automation, last-mile delivery, autonomous vehicles, and advanced manufacturing are all powered by spatial AI and computer vision, and over time they will offer more dynamic and versatile interactions with physical environments. This could enable revolutionary applications, from robot-assisted surgery to fully-autonomous transportation.

What’s Next? Customer-Centric AI Apps

There is a shift happening right now in AI, from technology-centric solutions that solve infrastructure problems to customer-centric applications that comprehensively solve real-world human problems. This evolution signals a move beyond the initial excitement over AI’s capabilities to a phase where the technology meets the diverse and complex needs of end users.

While the concept of a data flywheel remains relevant (as a reminder: more usage → more data → better model → more usage), the sustainability of data moats is on shaky ground. The real moats, it seems, are the customers themselves. Engagement depth, productivity benefits, and monetization strategies are emerging as more durable sources of competitive advantage.

Hassan Sawaf, CEO and founder of AIXplain, highlights the power of a customer-focused engagement approach. “We believe in AI agents that can help swiftly craft personalized solutions by leveraging cutting-edge technology from leading AI providers in real time. That’s what we’ve created with Bel Esprit and 40,000 state-of-the-art models with the capability to onboard hundreds of thousands more from platforms with proprietary sources in minutes. One click is all it takes for deployment, making Bel Esprit a game-changer in the AI landscape.”

Reflection

As we reflect on the success of what is now recognized as GEN AI, it is imperative to acknowledge the collective contributions of the tech titans that I’ve mentioned. Their advancements in deep learning, natural language processing, accessibility, and hardware infrastructure have not only shaped the trajectory of AI, but have also ushered in an era where intelligent technologies play an integral role in shaping our digital future.

Celebrating the pioneers in AI emphasizes their innovative spirit and unwavering commitment to advancing the boundaries of technology, paving the way for the sophisticated AI models that define our current era. A round of applause for the diligent technologists working at these companies.

Sandy Carter

I’m COO at Unstoppable Domains and an alumna of AWS and IBM. I’m also a chairwoman on the board of the nonprofit Girls in Tech, a former member of the Diversity Committee at the World Economic Forum, and currently a founding member of the Blockchain Friends Forever social movement for women in Web3. I hold and trade modest amounts of ETH and BTC. These days I’m passionate about enterprise use cases for decentralized technologies. My latest book, “The Tiger and the Rabbit,” is shipping on 8/30! It’s a business fable about AI, Web3 and the Metaverse!

Article link: https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/digital-assets/2023/11/07/the-evolution-of-ai-from-ibm-and-aws-to-openai-and-anthropic/amp/

Human operators must be held accountable for AI’s use in conflicts, Air Force secretary says – Nextgov

Posted by timmreardon on 12/07/2023
Posted in: Uncategorized.

By EDWARD GRAHAM | DECEMBER 4, 2023

The Pentagon needs “to find a way to hold people accountable” for what artificial intelligence technologies do in future conflicts, according to Air Force Secretary Frank Kendall.

Humans will ultimately be held responsible for the use or misuse of artificial intelligence technologies during military conflicts, a top Department of Defense official said during a panel discussion at the Reagan National Defense Forum on Saturday.

Air Force Secretary Frank Kendall dismissed the notion “of the rogue robot that goes out there and runs around and shoots everything in sight indiscriminately,” highlighting the fact that AI technologies — particularly those deployed on the battlefields of the future — will be governed by some level of human oversight.

“I care a lot about civil society and the rule of law, including laws of armed conflict,” he said. “Our policies are written around compliance with those laws. You don’t enforce laws against machines; you enforce them against people. And I think our challenge is not to somehow limit what we can do with AI, but it’s to find a way to hold people accountable for what the AI does.”

Even as the Pentagon continues to experiment with AI, the department has worked to establish safeguards around its use of the technologies. DOD updated its decades-old policy on autonomous weapons in February to clarify, in part, that weapons with AI-enabled capabilities need to follow the department’s AI guidelines. 

The Pentagon previously issued a series of ethical AI principles in 2020 governing its use of the technologies, and released a data, analytics and AI adoption strategy in November that positioned quality of data as key to the department’s implementation of the advanced tech.

The goal for now, Kendall said, is to build confidence and trust in the technology and then “get it into field capabilities as quickly as we can.” 

“The critical parameter on the battlefield is time,” he added. “And AI will be able to do much more complicated things much more accurately and much faster than human beings can.”

Kendall pointed to two specific mistakes that AI could make “in a lethal area”: failing to engage a target that it should have engaged, or engaging civilian targets, U.S. military assets or allies. These possibilities, he said, necessitate more defined rules for holding operators responsible when such mistakes occur.

“We are still going to have to find ways to manage this technology, manage its application and hold human beings accountable for when it doesn’t comply with the rules that we already have,” he added. “I think that’s the approach we need to take.”

For the time being, however, the Pentagon’s uses of AI are largely focused on processing large amounts of data for more administrative-oriented tasks.

“There are enormous possibilities here, but it is not anywhere near general human intelligence equivalents,” Kendall said, citing pattern recognition and “deep data analytics to associate things from an intelligence perspective” as AI’s most effective applications.

During a discussion last month, Schuyler Moore — the chief technology officer for U.S. Central Command — cited AI’s uneven performance and said that during military conflicts, officials “will more frequently than not put it to the side or use it in very, very select contexts where we feel very certain of the risks associated.”

But concerns still remain about how these tools will ultimately be used to enhance future warfighting capabilities, and the specific policies that are needed to enforce safeguards.

Rep. Mike Gallagher, R-Wis. — who chairs the House Select Committee on the Chinese Communist Party and formerly co-chaired the Cyberspace Solarium Commission — said “we need to have a plan for whether and how we are going to quickly adopt [AI] across multiple battlefield domains and warfighting capabilities.” 

“I’m not sure we’ve thought through that,” Gallagher added.

Article link: https://www.nextgov.com/artificial-intelligence/2023/12/human-operators-must-be-held-accountable-ais-use-conflicts-air-force-secretary-says/392457/?

Bipartisan bill strives for ‘more nimble and meaningful’ federal contracting – Nextgov

Posted by timmreardon on 12/04/2023
Posted in: Uncategorized.

By EDWARD GRAHAM | JANUARY 22, 2024

Legislation from Sens. Gary Peters, D-Mich., and Joni Ernst, R-Iowa, would “streamline procedures” for both solicitation and awards by slimming down the procurement process.

A new bipartisan proposal seeks to simplify the federal contracting process — and potentially allow for more small businesses to work with the government — by reducing burdensome requirements and creating “a more nimble and meaningful bidding process and evaluation of proposals.”

The Conforming Procedures for Federal Task and Delivery Order Contracts Act was introduced by Sens. Gary Peters, D-Mich., and Joni Ernst, R-Iowa, on Jan. 19. 

The bill seeks “to streamline procedures for solicitation and the awarding of task and delivery order contracts for agencies” by shrinking “the procurement process for contractors bidding on work as well as for the government, ensuring necessary due diligence is done while allowing awards to be made faster and to a wider array of contractors, including small businesses.”

This includes reducing “duplication of documentation requirements for agencies” and applying some of the contracting measures that the Department of Defense “currently has in place to all federal agencies.”

Ernst — the ranking member of the Senate Small Business and Entrepreneurship Committee — said in a statement that “too much bureaucratic red tape stands in the way” when it comes to smaller companies effectively competing for federal contracts.

“By making the award process faster and wider, Iowa’s small businesses and entrepreneurs can better compete and succeed,” she added, referencing the benefits the bill would have for her Hawkeye State constituents. 

In a statement, Peters also said the legislation “streamlines the contracting process for federal government agencies, and as a result will boost small businesses trying to stay competitive and will increase efficiency for all government agencies, benefitting people across the nation.”

This isn’t the first time that Peters and Ernst have teamed up on legislation to improve the government’s procurement process, which is receiving renewed attention as lawmakers discuss the role that emerging technologies can play in bolstering the capabilities of federal services. 

The senators previously authored legislation, known as the PRICE Act, to “promote innovative acquisition techniques and procurement strategies” to improve the contracting process for small businesses. Their bill was signed into law in February 2022. 

Peters and Ernst also introduced legislation in July 2022 that would require the Office of Management and Budget and the General Services Administration “to streamline the ability of the federal government to purchase commercial technology and provide specific training for information and communications technology acquisition.” 

Following a Jan. 10 Senate Homeland Security and Governmental Affairs Committee hearing on how artificial intelligence can be used to improve government services, Peters — who chairs the panel — also told Nextgov/FCW “how the federal government procures AI… is going to have a big impact on AI throughout the economy.”

“And I think that’s a very effective way for us to think about AI regulation, through the procurement process,” he said.

Article link: https://www.nextgov.com/acquisition/2024/01/bipartisan-bill-strives-more-nimble-and-meaningful-federal-contracting/393508/?

The hardware and software for the era of quantum utility is here – IBM

Posted by timmreardon on 12/04/2023
Posted in: Uncategorized.

Welcome to a new era of quantum computing. https://ibm.co/3sU23xh

It’s the first day of IBM Quantum Summit 2023, and we are thrilled to share a bevy of announcements and updates with you.

At today’s event, we’re presenting new capabilities we’re developing in order to support the next wave of quantum users: quantum computational scientists. In addition to unveiling an operational IBM Quantum System Two, we’re sharing our newest, highest-performing quantum processor yet, Heron—and demonstrating how we’re scaling quantum processors with the 1,121-qubit IBM Condor processor.

We’re also introducing Qiskit 1.0, Qiskit’s first stable release, alongside Qiskit Patterns—a framework for quantum computational scientists to do meaningful scalable work with quantum algorithms. With Qiskit Patterns, users can seamlessly create quantum algorithms and applications from a collection of foundational building blocks and execute those Patterns using heterogeneous computing infrastructure such as Quantum Serverless, now available as a beta release. We’re also deploying new execution modes so computational scientists can maximize performance from our hardware while they run utility-scale workloads.
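
For readers new to the stack, the sketch below shows the kind of foundational building block a Pattern composes: a two-qubit Bell-state circuit, built and transpiled with the stable imports that Qiskit 1.0 commits to. It is a generic textbook example, not one of IBM’s Qiskit Patterns.

```python
# A minimal Qiskit example: build and transpile a two-qubit Bell-state
# circuit. Generic illustration only, not an IBM Qiskit Pattern.
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # measure both qubits into classical bits

# Transpile against no particular backend at optimization level 1;
# on real hardware you would pass a backend or target here.
compiled = transpile(qc, optimization_level=1)
print(compiled)
```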

And finally, we’re sharing our new roadmap, laying out a vision for quantum computing all the way to 2033. This is the most exciting time in quantum computing to date, and we’re so proud to share it with you. Head over to the IBM blog for more details.

Article link: https://www.linkedin.com/posts/ibm-quantum_ibm-quantum-system-two-activity-7137411949729869824-fRNI?

Federal Low Code SCOP Event 13 December – Federal CIO Council

Posted by timmreardon on 12/03/2023
Posted in: Uncategorized.

Calling all Federal Low Code practitioners and IT Modernization Champions!  I’m excited to share that, in partnership with GSA, we have formed the US Federal Low Code Subcommunity of Practice under the U.S. Federal Chief Information Officers (CIO) Council framework and our inaugural meeting will be held on 13 December 2023!

Key objectives for this group include:

  • Evaluating the low code landscape, including available platforms, tools, best practices, and success stories relevant to federal agencies
  • Developing guidelines, best practices, and educational resources for federal agencies to support the identification, selection, adoption and implementation of low code platforms, tools and services
  • Fostering collaboration among federal agencies, industry partners, and subject matter experts to share experiences, lessons learned, and success stories related to low code adoption
  • Identifying opportunities for pilot projects to demonstrate the value, challenges, and benefits of low code platforms in federal agencies
  • Providing inputs and recommendations to federal agencies and policymakers regarding policies, regulations, and standards that may impact the adoption and use of low code platforms
  • Engaging with low code platform and service vendors to understand their capabilities, roadmaps, and potential areas of collaboration to meet federal agency requirements

Together, our main goal is to promote the adoption and effective use of low code development methodologies and tools to accelerate digital transformation and improve citizen services across government.

Please join us on 13 December 2023 for our inaugural meeting where we will baseline current state, dive deeper into low code adoption challenges, begin to share best practices for low code platform adoption, and explore the low code product and services landscape.

Currently, we have representatives from every branch of government scheduled to attend, so we are excited about the early response to this initiative, to say the least!  This is an event that you will not want to miss! 

Keynote speaker Drew Myklegard, Deputy Federal Chief Information Officer at the Office of Management and Budget, will set the stage, and then we’ll jump right into our informative agenda.  To attend either in person in DC or virtually, please send an email (from your government domain) requesting US Federal LC SCoP membership to LCNC-subscribe-request@listserv.gsa.gov and a follow-on registration link to this event will be provided.

Note: This kickoff event will be for Federal Government Employees and badged contractors only, but we will open future events to industry partners and will announce those as they are planned.

#digitaltransformation #lowcode #lcnc #oneteam #itmodernization

Article link: https://www.linkedin.com/posts/activity-7132757702534987776-3bSm?

AWS Unveils Next Generation AWS-Designed Chips – Businesswire

Posted by timmreardon on 12/02/2023
Posted in: Uncategorized.

AWS Graviton4 is the most powerful and energy-efficient AWS processor to date for a broad range of cloud workloads

AWS Trainium2 will power the highest performance compute on AWS for training foundation models faster and at a lower cost, while using less energy

Anthropic, Databricks, Datadog, Epic, Honeycomb, and SAP among customers using new AWS-designed chips


November 28, 2023 11:25 AM Eastern Standard Time

LAS VEGAS–(BUSINESS WIRE)–At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced the next generation of two AWS-designed chip families—AWS Graviton4 and AWS Trainium2—delivering advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications. Graviton4 and Trainium2 mark the latest innovations in chip design from AWS. With each successive generation of chip, AWS delivers better price performance and energy efficiency, giving customers even more options—in addition to chip/instance combinations featuring the latest chips from third parties like AMD, Intel, and NVIDIA—to run virtually any application or workload on Amazon Elastic Compute Cloud (Amazon EC2).

  • Graviton4 provides up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than current generation Graviton3 processors, delivering the best price performance and energy efficiency for a broad range of workloads running on Amazon EC2.
  • Trainium2 is designed to deliver up to 4x faster training than first generation Trainium chips and will be able to be deployed in EC2 UltraClusters of up to 100,000 chips, making it possible to train foundation models (FMs) and large language models (LLMs) in a fraction of the time, while improving energy efficiency up to 2x.

“Silicon underpins every customer workload, making it a critical area of innovation for AWS,” said David Brown, vice president of Compute and Networking at AWS. “By focusing our chip designs on real workloads that matter to customers, we’re able to deliver the most advanced cloud infrastructure to them. Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy efficient chip we have ever built for a broad range of workloads. And with the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency.”

Graviton4 raises the bar on price performance and energy efficiency for a broad range of workloads

Today, AWS offers more than 150 different Graviton-powered Amazon EC2 instance types globally at scale, has built more than 2 million Graviton processors, and has more than 50,000 customers—including the top 100 EC2 customers—using Graviton-based instances to achieve the best price performance for their applications. Customers including Datadog, DirecTV, Discovery, Formula 1 (F1), NextRoll, Nielsen, Pinterest, SAP, Snowflake, Sprinklr, Stripe, and Zendesk use Graviton-based instances to run a broad range of workloads, such as databases, analytics, web servers, batch processing, ad serving, application servers, and microservices. As customers bring larger in-memory databases and analytics workloads to the cloud, their compute, memory, storage, and networking requirements increase. As a result, they need even higher performance and larger instance sizes to run these demanding workloads, while managing costs. Furthermore, customers want more energy-efficient compute options for their workloads to reduce their impact on the environment. Graviton is supported by many AWS managed services, including Amazon Aurora, Amazon ElastiCache, Amazon EMR, Amazon MemoryDB, Amazon OpenSearch, Amazon Relational Database Service (Amazon RDS), AWS Fargate, and AWS Lambda, bringing Graviton’s price performance benefits to users of those services.

Graviton4 processors deliver up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than Graviton3. Graviton4 also raises the bar on security by fully encrypting all high-speed physical hardware interfaces. Graviton4 will be available in memory-optimized Amazon EC2 R8g instances, enabling customers to improve the execution of their high-performance databases, in-memory caches, and big data analytics workloads. R8g instances offer larger instance sizes with up to 3x more vCPUs and 3x more memory than current generation R7g instances. This allows customers to process larger amounts of data, scale their workloads, improve time-to-results, and lower their total cost of ownership. Graviton4-powered R8g instances are available today in preview, with general availability planned in the coming months. To learn more about Graviton4-based R8g instances, visit aws.amazon.com/ec2/instance-types/r8g.
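
As a concrete illustration of what adopting R8g looks like from the API side, here is a hypothetical boto3 sketch of launching an instance once the family is generally available. The instance-type size and AMI ID below are placeholders, not values confirmed in this announcement.

```python
# Hypothetical sketch: launching a Graviton4-based R8g instance with
# boto3 once the family is generally available. The AMI ID is a
# placeholder and "r8g.xlarge" is an assumed size name.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: an arm64 AMI
    InstanceType="r8g.xlarge",        # assumed R8g instance size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```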

EC2 UltraClusters of Trainium2 are designed to deliver the highest-performance, most energy-efficient AI model training infrastructure in the cloud

The FMs and LLMs behind today’s emerging generative AI applications are trained on massive datasets. These models make it possible for customers to completely reimagine user experiences through the creation of a variety of new content, including text, audio, images, video, and even software code. The most advanced FMs and LLMs today range from hundreds of billions to trillions of parameters, requiring reliable high-performance compute capacity capable of scaling across tens of thousands of ML chips. AWS already provides the broadest and deepest choice of Amazon EC2 instances featuring ML chips, including the latest NVIDIA GPUs, Trainium, and Inferentia2. Today, customers including Databricks, Helixon, Money Forward, and the Amazon Search team use Trainium to train large-scale deep learning models, taking advantage of Trainium’s high performance, scale, reliability, and low cost. But even with the fastest accelerated instances available today, customers want more performance and scale to train these increasingly sophisticated models faster, at a lower cost, while simultaneously reducing the amount of energy they use.

Trainium2 chips are purpose-built for high-performance training of FMs and LLMs with up to trillions of parameters. Trainium2 is designed to deliver up to 4x faster training performance and 3x more memory capacity compared to first generation Trainium chips, while improving energy efficiency (performance/watt) up to 2x. Trainium2 will be available in Amazon EC2 Trn2 instances, containing 16 Trainium chips in a single instance. Trn2 instances are intended to enable customers to scale up to 100,000 Trainium2 chips in next generation EC2 UltraClusters, interconnected with AWS Elastic Fabric Adapter (EFA) petabit-scale networking, delivering up to 65 exaflops of compute and giving customers on-demand access to supercomputer-class performance. With this level of scale, customers can train a 300-billion parameter LLM in weeks versus months. By delivering the highest scale-out ML training performance at significantly lower costs, Trn2 instances can help customers unlock and accelerate the next wave of advances in generative AI. To learn more about Trainium, visit aws.amazon.com/machine-learning/trainium/.
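
A quick back-of-the-envelope check of those figures, assuming the quoted 65 exaflops is spread evenly across a full 100,000-chip UltraCluster:

```python
# Sanity check on the announced numbers: total cluster compute divided
# by chip count gives approximate per-chip throughput.
total_flops = 65e18   # 65 exaflops, as quoted for a full UltraCluster
chips = 100_000       # maximum Trainium2 chips per UltraCluster
per_chip_tflops = total_flops / chips / 1e12
print(f"~{per_chip_tflops:.0f} TFLOPS per Trainium2 chip")
```

That works out to roughly 650 teraflops per chip, presumably at a low-precision format suited to ML training, though the release does not specify one.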

A leading advocate for the responsible deployment of generative AI, Anthropic is an AI safety and research company that creates reliable, interpretable, and steerable AI systems. An AWS customer since 2021, Anthropic recently launched Claude, an AI assistant focused on being helpful, harmless, and honest. “Since launching on Amazon Bedrock, Claude has seen rapid adoption from AWS customers,” said Tom Brown, co-founder of Anthropic. “We are working closely with AWS to develop our future foundation models using Trainium chips. Trainium2 will help us build and train models at a very large scale, and we expect it to be at least 4x faster than first generation Trainium chips for some of our key workloads. Our collaboration with AWS will help organizations of all sizes unlock new possibilities, as they use Anthropic’s state-of-the-art AI systems together with AWS’s secure, reliable cloud technology.”

More than 10,000 organizations worldwide—including Comcast, Condé Nast, and over 50% of the Fortune 500—rely on Databricks to unify their data, analytics, and AI. “Thousands of customers have implemented Databricks on AWS, giving them the ability to use MosaicML to pre-train, finetune, and serve FMs for a variety of use cases,” said Naveen Rao, vice president of Generative AI at Databricks. “AWS Trainium gives us the scale and high performance needed to train our Mosaic MPT models, and at a low cost. As we train our next generation Mosaic MPT models, Trainium2 will make it possible to build models even faster, allowing us to provide our customers unprecedented scale and performance so they can bring their own generative AI applications to market more rapidly.”

Datadog is an observability and security platform that provides full visibility across organizations. “At Datadog, we run tens of thousands of nodes, so balancing performance and cost effectiveness is extremely important. That’s why we already run half of our Amazon EC2 fleet on Graviton,” said Laurent Bernaille, principal engineer at Datadog. “Integrating Graviton4-based instances into our environment was seamless, and gave us an immediate performance boost out of the box, and we’re looking forward to using Graviton4 when it becomes generally available.”

Epic is a leading interactive entertainment company and provider of 3D engine technology. Epic operates Fortnite, one of the world’s largest games with over 350 million accounts and 2.5 billion friend connections. “AWS Graviton4 instances are the fastest EC2 instances we’ve ever tested, and they are delivering outstanding performance across our most competitive and latency sensitive workloads,” said Roman Visintine, lead cloud engineer at Epic. “We look forward to using Graviton4 to improve player experience and expand what is possible within Fortnite.”

Honeycomb is the observability platform that enables engineering teams to find and solve problems they couldn’t before. “We are thrilled to have evaluated AWS Graviton4-based R8g instances,” said Liz Fong-Jones, Field CTO at Honeycomb. “In recent tests, our Go-based OpenTelemetry data ingestion workload required 25% fewer replicas on the Graviton4-based R8g instances compared to Graviton3-based C7g/M7g/R7g instances—and additionally achieved a 20% improvement in median latency and 10% improvement in 99th percentile latency. We look forward to leveraging Graviton4-based instances once they become generally available.”

SAP HANA Cloud, SAP’s cloud-native in-memory database, is the data management foundation of SAP Business Technology Platform (SAP BTP). “Customers rely on SAP HANA Cloud to run their mission-critical business processes and next-generation intelligent data applications in the cloud,” said Juergen Mueller, CTO and member of the Executive Board of SAP SE. “As part of the migration process of SAP HANA Cloud to AWS Graviton-based Amazon EC2 instances, we have already seen up to 35% better price performance for analytical workloads. In the coming months, we look forward to validating Graviton4, and the benefits it can bring to our joint customers.”

About Amazon Web Services

Since 2006, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud. AWS has been continually expanding its services to support virtually any workload, and it now has more than 240 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 102 Availability Zones within 32 geographic regions, with announced plans for 15 more Availability Zones and five more AWS Regions in Canada, Germany, Malaysia, New Zealand, and Thailand. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

Contacts

Amazon.com, Inc.
Media Hotline
Amazon-pr@amazon.com
www.amazon.com/pr

Article link: https://www.businesswire.com/news/home/20231128145465/en/AWS-Unveils-Next-Generation-AWS-Designed-Chips

Army moving away from compliance-based cybersecurity

Posted by timmreardon on 11/30/2023
Posted in: Uncategorized.

As the Army modernizes its network, it is looking at evolving the way it protects and defends critical IT and cyber terrain.

BY MARK POMERLEAU

NOVEMBER 30, 2023

As the Army modernizes its network, it is looking to emphasize cybersecurity operations as the next step in maturity, moving beyond compliance.

Officials have described becoming more proactive against cyber threats as opposed to a reactive posture, which involves enhancing the training and abilities of the signal corps, improving policies, and developing new concepts and capabilities such as the central delivery of services.

“We’ve been doing cybersecurity operations, but it’s been exceedingly compliance based. Meaning, fill out the checklist … you’re cyber secure. Against a thinking adversary, we know that won’t work,” Lt. Gen. John Morrison, deputy chief of staff, G6, said in an interview. “We’re really shifting from a compliance-based approach to really be active in cybersecurity operations. That is the big shift that I think you’re seeing not just inside the Army, but across the entire Department of Defense … I think the reason that we pound on cybersecurity operations is really making sure that folks know that we are transitioning from a compliance-based, very passive approach to cybersecurity and rapidly moving to something that’s much more active in the day to day.”

The Army has been on a multiyear journey to mature its network, consolidating the various instantiations from the tactical level and the enterprise to create what the service calls the unified network that soldiers can access all over the world regardless of theater or echelon.

As part of this push, the Army wants to better integrate the functions of cybersecurity and cybersecurity operations — which in some circles are thought of as defensive cyber ops that seek to be more proactive and hunt malicious activity on the network rather than being more reactive to threats.

“This thing that we called cybersecurity operations, really does bleed over into what is defensive cyber operations. I think the big thing that it does is it starts focusing us on being less focused on the administrivia of the day and work focused on the technical risks,” Leonel Garciga, the Army’s chief information officer, told DefenseScoop. “I think that’s really what it boils down to and that’s the distinction. It’s how do we start moving in a direction where we’re more holistically focused on understanding the data that’s being delivered on the network, right, the unified network, and being able to react to that data, whether it be from a threat, or a status of our posture. That’s different.”

Bucketing these notions in this way allows the Army to begin to reduce complexity.

“It allows us to take a look at, okay, so for the basic, the threat agnostic, defensive of our network — read cybersecurity operations — then if we layer that across the unified network, we’re able to now layer in capabilities and force structure, right, so we can put complexity in the right spot,” Morrison said.

One of the major efforts associated with the unified network approach is moving complexity from lower echelons so they can focus on warfighting, not getting their communications or IT established.

Part of that is centralizing the delivery of services and capabilities like Unified Security Incident and Event Monitoring, which aims to provide end-to-end network visibility across all echelons, spanning the strategic enterprise level all the way to tactical formations.

“By moving towards this notion of unified net ops and defense capabilities, we’re now able to layer that on echelon that, quite frankly, we had not been able to see at any other time. We introduced new capabilities like Unified Security Incident and Event Monitoring, that now go across all echelons from strategic, operational, down to the tactical edge, where everybody can see the same thing and then the person with the time to act on it, can then act on it,” Garciga said. “It helps us from a budgetary perspective, it’s going to help us from how we actually organize our forces to conduct cybersecurity operations. And then it’s, quite frankly, going to take that complexity off the edge and give it to folks that actually have time to manage.”

Garciga noted that efforts to modernize the network — whether it’s Risk Management Framework 2.0, new software or new policies — change the nature of cybersecurity itself and thus change the skill sets that are needed.

“As we look moving forward and getting not reactive, but proactive against cyber threats and you’re starting to see that scale out to the traditional part of the Signal Corps and how we deliver services — and that’s an important distinction and change that’s happening as we move to the Army in 2030,” Garciga said. “In many ways as we are moving forward, right, taking this traditional approach to cybersecurity and operationalizing it, is, in effect, increasing the size of what we would call that defensive cyber operating force.”

Morrison described upskilling and reskilling efforts that the Army needs to look hard at for both the military and civilian side of the workforce.

“We’re just in the nascent stages of changing several of our specialties over to be data engineers to really start helping bring all that together at the strategic, operational and tactical spaces,” he said. “Training is really, really, really important. We got the depth we need right now, but I will tell you as this gets more and more inculcated across our Army, we need to build that technical depth across all of our formations.”

Article link: https://defensescoop.com/2023/11/30/army-moving-away-from-compliance-based-cybersecurity/?

Big Companies Find a Way to Identify A.I. Data They Can Trust – NYT

Posted by timmreardon on 11/30/2023
Posted in: Uncategorized.

Thi Montalvo, a data scientist at Transcarent, sees the potential for significant time savings from using the Data & Trust Alliance’s labeling standards in A.I. projects. Credit: Rachel Woolf for The New York Times

By Steve Lohr

Steve Lohr has covered data and software for more than 20 years.

Nov. 30, 2023, 6:00 a.m. ET

Data is the fuel of artificial intelligence. It is also a bottleneck for big businesses, because they are reluctant to fully embrace the technology without knowing more about the data used to build A.I. programs.

Now, a consortium of companies has developed standards for describing the origin, history and legal rights to data. The standards are essentially a labeling system for where, when and how data was collected and generated, as well as its intended use and restrictions.

The data provenance standards, announced on Thursday, have been developed by the Data & Trust Alliance, a nonprofit group made up of two dozen mainly large companies and organizations, including American Express, Humana, IBM, Pfizer, UPS and Walmart, as well as a few start-ups.

The alliance members believe the data-labeling system will be similar to the fundamental standards for food safety that require basic information like where food came from, who produced and grew it and who handled the food on its way to a grocery shelf.

Greater clarity and more information about the data used in A.I. models, executives say, will bolster corporate confidence in the technology. How widely the proposed standards will be used is uncertain, and much will depend on how easy the standards are to apply and automate. But standards have accelerated the use of every significant technology, from electricity to the internet.

“This is a step toward managing data as an asset, which is what everyone in industry is trying to do today,” said Ken Finnerty, president for information technology and data analytics at UPS. “To do that, you have to know where the data was created, under what circumstances, its intended purpose and where it’s legal to use or not.”

Surveys point to the need for greater confidence in data and for improved efficiency in data handling. In one poll of corporate chief executives, a majority cited “concerns about data lineage or provenance” as a key barrier to A.I. adoption. And a survey of data scientists found that they spent nearly 40 percent of their time on data preparation tasks.

The data initiative is mainly intended for business data that companies use to make their own A.I. programs or data they may selectively feed into A.I. systems from companies like Google, OpenAI, Microsoft and Anthropic. The more accurate and trustworthy the data, the more reliable the A.I.-generated answers.

For years, companies have been using A.I. in applications that range from tailoring product recommendations to predicting when jet engines will need maintenance.

But the rise in the past year of the so-called generative A.I. that powers chatbots like OpenAI’s ChatGPT has heightened concerns about the use and misuse of data. These systems can generate text and computer code with humanlike fluency, yet they often make things up — “hallucinate,” as researchers put it — depending on the data they access and assemble.

Companies do not typically allow their workers to freely use the consumer versions of the chatbots. But they are using their own data in pilot projects that use the generative capabilities of the A.I. systems to help write business reports, presentations and computer code. And that corporate data can come from many sources, including customers, suppliers, weather and location data.

“The secret sauce is not the model,” said Rob Thomas, IBM’s senior vice president of software. “It’s the data.”

In the new system, there are eight basic standards, including lineage, source, legal rights, data type and generation method. Then there are more detailed descriptions for most of the standards — such as noting that the data came from social media or industrial sensors, for example.
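
The article names five of the eight standards (lineage, source, legal rights, data type and generation method) and mentions intended use and restrictions earlier on; below is a hypothetical sketch of what one such label might look like as a simple record. The field names and values are invented for illustration, and the alliance’s actual schema may differ.

```python
# Hypothetical sketch of a data provenance label based on the standards
# named in the article (lineage, source, legal rights, data type,
# generation method, intended use, restrictions). Field names and
# values are invented; the alliance's actual schema may differ.
provenance_label = {
    "lineage": "derived from claims-2023 dataset, deduplicated",
    "source": "industrial sensors",  # or "social media", etc.
    "legal_rights": "licensed for internal model training only",
    "data_type": "structured time series",
    "generation_method": "machine-generated",
    "collected_at": "2023-06-01",
    "intended_use": "predictive maintenance models",
    "restrictions": ["no resale", "no re-identification"],
}

for field, value in provenance_label.items():
    print(f"{field}: {value}")
```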

The data documentation can be done in a variety of widely used technical formats. Companies in the data consortium have been testing the standards to improve and refine them, and the plan is to make them available to the public early next year.

Labeling data by type, date and source has been done by individual companies and industries. But the consortium says these are the first detailed standards meant to be used across all industries.

“My whole life I’ve spent drowning in data and trying to figure out what I can use and what is accurate,” said Thi Montalvo, a data scientist and vice president of reporting and analytics at Transcarent.

Transcarent, a member of the data consortium, is a start-up that relies on data analysis and machine-learning models to personalize health care and speed payment to providers.

The benefit of the data standards, Ms. Montalvo said, comes from greater transparency for everyone in the data supply chain. That workflow often begins with negotiating contracts with insurers for access to claims data and continues with the start-up’s data scientists, statisticians and health economists who build predictive models to guide treatment for patients.

At each stage, knowing more about the data sooner should increase efficiency and eliminate repetitive work, potentially reducing the time spent on data projects by 15 to 20 percent, Ms. Montalvo estimates.

The data consortium says the A.I. market today needs the clarity the group’s data-labeling standards can provide. “This can help solve some of the problems in A.I. that everyone is talking about,” said Chris Hazard, a co-founder and the chief technology officer of Howso, a start-up that makes data-analysis tools and A.I. software.

Steve Lohr covers technology, economics and work force issues. He was part of the team awarded the Pulitzer Prize for explanatory reporting in 2013.

Article link: https://www.nytimes.com/2023/11/30/business/ai-data-standards.html

8 Data Provenance Standards to Foster Trust in Data and AI

Posted by timmreardon on 11/30/2023
Posted in: Uncategorized.

Based on work from experts across nineteen leading enterprises, including IBM, the Data & Trust Alliance announced eight proposed data provenance standards to help foster trust in data and #AI.

Learn how these cross-industry standards aim to bring transparency to the origin of #data, including data used to train AI models: https://ibm.co/3R4Yyfc

Article link: https://www.linkedin.com/posts/ibmdata_ai-data-activity-7135990653766823936-0wa6?
