healthcarereimagined

Envisioning healthcare for the 21st century

  • About
  • Economics

China Unveils First Homegrown AI PC Processor – Techovedas

Posted by timmreardon on 08/03/2024
Posted in: Uncategorized.

EDITORIAL TEAM

AUGUST 1, 2024

INTERNATIONAL, SEMICONDUCTOR NEWS

    Introduction

    In a notable advancement for China’s technology sector, Cixin Technology has launched the Cixin P1, the country’s first domestically developed “AI PC” processor.

    This significant achievement underscores China’s growing capabilities in high-performance computing and artificial intelligence (AI). 

    Purpose-built for AI tasks: The Cixin P1 is specifically designed to handle the demanding computational requirements of artificial intelligence applications.

    Potential for high performance: While specific benchmarks and performance metrics are yet to be fully disclosed, the processor is expected to deliver impressive results in AI-related workloads.

    Domestic chip manufacturing: This development is a crucial step in China’s efforts to reduce reliance on foreign chipmakers and achieve self-sufficiency in the semiconductor industry.

    With its state-of-the-art specifications and competitive features, the Cixin P1 is poised to make a substantial impact on the global semiconductor landscape.

    Introducing the Cixin P1

    Cixin Technology’s latest innovation, the Cixin P1, represents a major leap forward in China’s efforts to achieve technological self-sufficiency. This processor is based on ARM architecture, a design that has become increasingly popular in modern computing. 

    China has unveiled the Cixin P1, its first domestically developed AI PC processor. With a 12-core ARM architecture, 45 TOPS NPU, and support for up to 64GB of LPDDR5 memory, the Cixin P1 promises to rival global competitors.

    Key Specifications

    The Cixin P1 boasts a 12-core configuration, divided into eight performance cores and four efficiency cores. This setup allows for robust multitasking and high-performance computing. 

    The processor can reach a boost clock speed of up to 3.2 GHz, providing significant power for demanding applications. 

    It is manufactured using a 6nm process node, which ensures improved efficiency and performance compared to older process technologies.

    AI Capabilities and Performance

    A standout feature of the Cixin P1 is its integrated Neural Processing Unit (NPU), which delivers up to 45 TOPS (trillions of operations per second) in AI performance. This places it on par with some of the latest AI-focused processors from global competitors.

    The NPU is designed to handle intensive AI tasks, making the Cixin P1 well-suited for applications in machine learning, computer vision, and natural language processing.

    The AI capabilities of the Cixin P1 are expected to drive innovation in various sectors, including smart devices, autonomous systems, and data analytics.

    Its ability to process complex AI workloads efficiently could position it as a key player in the rapidly evolving AI landscape.

    Memory and Display Support

    The Cixin P1 supports up to 64GB of LPDDR5-6400 memory, providing ample bandwidth for high-performance computing tasks. 

    This is particularly advantageous for applications requiring substantial memory capacity, such as high-definition video processing and gaming.

    Additionally, the processor can drive a 4K 120Hz display, enabling smooth and high-resolution visuals. 

    This feature is essential for applications that demand high-definition output, such as virtual reality (VR) and advanced gaming.

    Connectivity and Expansion

    The Cixin P1 includes support for PCIe Gen4, which offers faster data transfer rates compared to its predecessors. 

    This enhancement is crucial for applications requiring high-speed data access and storage.

    The processor also features USB-C support, although it remains unclear whether this is limited to USB 3.2 or includes the more advanced USB4 standard. 

    USB-C connectivity is becoming increasingly important for versatile and high-speed data transfer, and clarity on this feature will be important for users and developers.

    Significance and Market Impact

    The Cixin P1’s launch is a landmark event for China’s semiconductor industry. The ability to develop and manufacture advanced processors domestically reduces the country’s reliance on foreign technology, enhancing its technological sovereignty.

    This is particularly significant in the context of ongoing geopolitical tensions and trade restrictions, especially those involving the United States.

    The Cixin P1 is expected to have a broad range of applications, including in consumer electronics, industrial automation, and data centers.

    Its high-performance specifications make it an attractive option for companies and industries looking to leverage advanced computing and AI capabilities.

    Challenges and Future Outlook

    While the Cixin P1 represents a significant technological achievement, several challenges remain. One of the primary concerns is the lack of comprehensive performance data. 

    As of now, there are no detailed benchmarks available to fully assess the processor’s real-world performance. This data will be crucial in determining the processor’s competitiveness in the global market.

    Additionally, Cixin Technology will need to ensure that the Cixin P1 is supported by a robust software ecosystem. 

    Software compatibility and support are critical factors in the widespread adoption of new processors, and the company will need to address this to ensure the success of the Cixin P1.

    Market perception will also play a role in the processor’s success. Gaining the trust of global consumers and businesses will require effective branding and marketing strategies. 

    The Cixin P1’s performance, combined with strategic positioning, will be key to its acceptance and success in the international market.

    Conclusion

    The Cixin P1 is a groundbreaking development for China’s semiconductor industry, showcasing the country’s growing capabilities in AI and advanced computing technologies. 

    With its impressive specifications and potential applications, the Cixin P1 has the potential to make a significant impact on the global processor market.

    As more details emerge and performance benchmarks become available, the Cixin P1 could redefine the landscape of AI processors, offering a competitive alternative to established global players. 

    The launch of this processor highlights China’s commitment to innovation and technological advancement, setting the stage for further developments in the semiconductor industry.

    For ongoing updates and detailed analyses of the Cixin P1 and other technological advancements, stay tuned to our coverage.

    Article link: https://techovedas.com/china-unveils-first-homegrown-ai-pc-processor/

    Cost overruns, delays plague VA’s new integrated financial management system – Nextgov

    Posted by timmreardon on 07/31/2024
    Posted in: Uncategorized.

    By EDWARD GRAHAM, JULY 24, 2024

    The rollout of VA’s modernized financial management and acquisition system has been affected by delays in the department’s new electronic health record system, since “multiple deployments” depend on the EHR’s launch at medical facilities.

    The Department of Veterans Affairs has followed “leading practices” as it works to modernize its outdated financial management system but has still faced cost overruns and scheduling delays, the Government Accountability Office said in a report released on Tuesday. 

    VA’s financial management system — which helps administer benefits programs for veterans and their beneficiaries — is more than 30 years old, with department officials complaining that it is inefficient and difficult to maintain. 

    GAO noted that VA previously attempted to replace the legacy system twice since 1998, but that those efforts “failed after years of development and hundreds of millions of dollars in cost.”

    Hoping to right these aborted efforts, VA launched a new initiative in 2016 to create an integrated system for both its financial management and acquisition systems. The new network, known as the Integrated Financial and Acquisition Management System, or iFAMS, is expected to serve as “an enterprise resource planning cloud solution.”

    GAO noted, however, that the initiative has faced growing cost overruns since its conception. 

    “Total estimated iFAMS implementation costs increased from $2.5 billion for its 2019 life cycle cost estimate to $7.5 billion for its 2022 life cycle cost estimate,” the report said, although it noted that “nearly half the cost increase from 2019 to 2022 is due to including 18 years of additional operations and support costs for the full iFAMS projected useful life.”

    After conducting this estimate, GAO found that the system’s “October 2023 cost estimate increased by approximately $200 million over the 2022 estimate to $7.7 billion.”

    VA officials told the watchdog the increase was the result of “additional projected contract costs for its business intelligence reporting tool and increases in current contract cost for program deployments.”

    Although VA estimated that the new system would be fully implemented by 2030, GAO’s analysis also warned that “this date is questionable,” since officials have “not yet determined final implementation dates for multiple deployments at [the Veterans Benefits Administration] and Veterans Health Administration that affect its timeline.” In 2020, VA officials said iFAMS would be deployed by 2028.

    Another factor affecting the iFAMS implementation schedule is that “multiple deployments depend on other currently paused or delayed VA IT modernization efforts, such as Electronic Health Record Modernization.”

    VA’s effort to implement a new Oracle Cerner EHR system at all of its medical facilities has also faced its own cost overruns and technical challenges. The department implemented a “reset period” last year that paused most deployments of the new software, which has only been rolled out at six medical facilities since 2020. 

    GAO also reiterated prior recommendations it made for VA to establish “reliable cost and schedule estimates” and develop “target values for customer experience metrics” to measure progress over time. 

    Although GAO found that “VA’s risk management policies and procedures were consistent with leading practices,” it also recommended that officials take further steps to “develop more comprehensive risk response plans to help mitigate risks related to systems integration with other IT modernization projects.”

    VA concurred with the watchdog’s recommendation and said it also submitted documents to GAO outlining its goals for operational and customer experience metrics.

    Article link: https://www.nextgov.com/modernization/2024/07/cost-overruns-delays-plague-vas-new-integrated-financial-management-system/398304/?

    AI Companies Say Safety Is a Priority. It’s Not – RAND

    Posted by timmreardon on 07/28/2024
    Posted in: Uncategorized.

    COMMENTARY Jul 9, 2024

    By Douglas Yeung

    This commentary originally appeared on San Francisco Chronicle on July 9, 2024. 

    It could save us or it could kill us.

    That’s what many of the top technologists in the world believe about the future of artificial intelligence. This is why companies like OpenAI emphasize their dedication to seemingly conflicting goals: accelerating technological progress as rapidly—but also as safely—as possible.

    It’s a laudable intention, but not one of these many companies seems to be succeeding.

    Take OpenAI, for example. The leading AI company in the world believes the best approach to building beneficial technology is to ensure that its employees are “perfectly aligned” with the organization’s mission. That sounds reasonable, except what does it mean in practice?

    A lot of groupthink—and that is dangerous.

    As social animals, it’s natural for us to form groups or tribes to pursue shared goals. But these groups can grow insular and secretive, distrustful of outsiders and their ideas. Decades of psychological research have shown how groups can stifle dissent by punishing or even casting out dissenters. In the 1986 Challenger space shuttle explosion, engineers expressed safety concerns about the rocket boosters in freezing weather. Yet the engineers were overruled by their leadership, who may have felt pressure to avoid delaying the launch.


    Something similar is taking place at OpenAI, according to a group of AI insiders. In an open letter, nine current and former employees say the company uses hardball tactics to stifle dissent from workers about their technology. One of the researchers who signed the letter described the company as “recklessly racing” for dominance in the field.

    It’s not just happening at OpenAI. Earlier this year, an engineer at Microsoft grew concerned that the company’s AI tools were generating violent and sexual imagery. He first tried to get the company to pull them off the market but when that didn’t work, he went public. Then, he said, Microsoft’s legal team demanded he delete the LinkedIn post. In 2021, former Facebook project manager Frances Haugen revealed internal research that showed the company knew the algorithms—often referred to as the building blocks of AI—that Instagram used to surface content for young users were exposing teen girls to images that were harmful to their mental health. When asked in an interview with “60 Minutes” why she spoke out, Haugen responded, “Person after person after person has tackled this inside of Facebook and ground themselves to the ground.”

    Leaders at AI companies claim they have a laser focus on ensuring that their products are safe. They have, for example, commissioned research, set up “trust and safety” teams, and even started new companies to help achieve these aims. But these claims are undercut when insiders paint a familiar picture of a culture of negligence and secrecy that—far from prioritizing safety—instead dismisses warnings and hides evidence about unsafe practices, whether to preserve profits, avoid slowing progress, or simply to spare the feelings of leaders. 

    So what can these companies do differently?

    As a first step, AI companies could ban nondisparagement or confidentiality clauses. The OpenAI whistleblowers asked for that in their open letter and the company says it has already taken such steps. But removing explicit threats of punishment isn’t enough if an insular workplace culture continues to implicitly discourage concerns that might slow progress.

    Rather than simply allowing dissent, tech companies could encourage it, putting more options on the table. This could involve, say, beefing up the “bug bounty” programs that tech companies already use to reward employees and customers who identify flaws in their software. Companies could embed a “devil’s advocate” role inside software or policy teams that would be charged with opposing consensus positions.

    AI companies might also learn from how other highly skilled, mission-focused teams avoid groupthink. Military special operations forces prize group cohesion but recognize that cultivating dissent—from anyone, regardless of rank or role—might prove the difference between life and death. For example, Army doctrine—fundamental principles of military organizations—emphasizes that special operations forces must know how to employ small teams and individuals as autonomous actors.

    Finally, organizations already working to make AI models more transparent could shed light on their inner workings. Secrecy has been ingrained in how many AI companies operate; rebuilding public trust could require pulling back that curtain by, for example, more clearly explaining safety processes or publicly responding to criticism.


    To be sure, group decisionmaking can benefit from pooling information or overcoming individual biases, but too often it results in overconfidence or conforming to group norms. With AI, the stakes of silencing those who don’t toe the company line, instead of viewing them as vital sources of mission-critical information, are too high to ignore.

    It’s human nature to form tribes—to want to work with and seek support from a tight group of like-minded people. It’s also admirable, if grandiose, to adopt as one’s mission nothing less than building tools to tackle humanity’s greatest challenges. But AI technologies will likely fall short of that lofty goal—rapid yet responsible technological advancement—if its developers fall prey to a fundamental human flaw: refusing to heed hard truths from those who would know.

    Douglas Yeung is a senior behavioral scientist at RAND and a member of the Pardee RAND Graduate School faculty.

    Article link: https://www.rand.org/pubs/commentary/2024/07/ai-companies-say-safety-is-a-priority-its-not.html?

    Acquisition officials highlight need for transparency in AI discussions with industry – Fedscoop

    Posted by timmreardon on 07/27/2024
    Posted in: Uncategorized.

    Federal government acquisition officials from GSA and NASA said transparency is key in discussions about purchasing artificial intelligence technologies.

    BY MADISON ALDER

    JUNE 21, 2024

    Transparency about what artificial intelligence technologies can actually do is key to conversations about the government potentially purchasing the technology, two government acquisition officials said Thursday.

    Officials from the General Services Administration and NASA underscored the need for honest conversations and updated ways of thinking about contracts in a panel discussion about keeping pace with innovations in government technology purchasing. That discussion, during a Professional Services Council event on federal acquisition, focused heavily on purchasing AI, whose boom in popularity has also reverberated throughout the government.

    “What I’m seeing as a buyer of this type of technology is I’m being sold the world, and when I go to look at it, it’s not really the world. It’s this little dirt path on the corner,” said Geoff Sage, director of the Enterprise Service and Analysis Division in NASA’s Office of Procurement. 

    Sage noted that generative AI is “changing the game every single day,” so something that’s important for his agency is the ability to “take baby steps to prove out a bigger concept.” Those efforts can be learning opportunities, he said.

    Udaya Patnaik, chief innovation officer for the Office of IT Category in GSA’s Federal Acquisition Service, said the challenge with trying to “wrangle a constantly evolving technology” is that the capabilities of that technology aren’t clear. 

    “That requires a level of transparency between industry and government to really say, ‘look, this is what we know, and this is what we don’t know,’” Patnaik said. For example, he said industry needs to be able to identify where a model comes from, the data it’s trained on and the biases that could exist in the system. 

    The discussion comes as the Biden administration and members of Congress are looking at ways to address how the government purchases AI. The Office of Management and Budget recently solicited information from the public to inform its work to ensure procurement of AI by federal agencies is responsible. A bipartisan Senate bill would mandate that agencies assess the risks of the technology before purchasing and using it.

    In addition to transparency, Patnaik also said it’s important to look at contracts “openly” because the way that AI or machine learning technologies from 10 or 15 years ago used to be acquired isn’t relevant anymore. 

    That requires “an unprecedented level of real tight coordination and conversation between the acquisition community, the legal community, and the technical community to really understand what’s there and what’s not,” Patnaik said.

    With respect to older methods of buying, Sage similarly said “we need to be more innovative.” 

    Due to the proliferation of the technology in different areas, he explained that there is heightened focus on topics that come with generative AI such as data rights and copyright infringement.

    Sage said NASA has been pushing for early and open communications internally that include the office of the chief information officer, lawyers, and technical professionals from day one.

    In an interview with press at the same event, PSC President and CEO David Berteau said keeping pace with the speed of the technology’s rapid evolution and evaluating results are “two competing dynamics” that the White House has to focus on in its action.

    “How do you pace the government’s incorporation with the pace of development of technology is the first key question. The second is, what’s it worth?” Berteau said.

    He said that it’s not like code where there was a methodology for creating a proposal and estimating how much it would cost to write lines of code. “Now it looks like it’s almost instantaneous, but may be exactly worth nothing,” Berteau said.

    Article link: https://fedscoop.com/acquisition-officials-highlight-need-transparency-ai-industry/

    AI trained on AI garbage spits out AI garbage – MIT Technology Review

    Posted by timmreardon on 07/25/2024
    Posted in: Uncategorized.

    As junk web pages written by AI proliferate, the models that rely on that data will suffer.

    By Scott J Mulligan

    July 24, 2024

    AI models work by training on huge swaths of data from the internet. But as AI is increasingly being used to pump out web pages filled with junk content, that process is in danger of being undermined.

    New research published in Nature shows that the quality of a model’s output gradually degrades when AI trains on AI-generated data. As subsequent models produce output that is then used as training data for future models, the effect gets worse.  

    Ilia Shumailov, a computer scientist from the University of Oxford, who led the study, likens the process to taking photos of photos. “If you take a picture and you scan it, and then you print it, and you repeat this process over time, basically the noise overwhelms the whole process,” he says. “You’re left with a dark square.” The equivalent of the dark square for AI is called “model collapse,” he says, meaning the model just produces incoherent garbage. 
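The compounding loss Shumailov describes can be seen in a toy simulation (a hypothetical sketch, not the study’s actual setup): repeatedly fit a simple Gaussian model to a small sample drawn from the previous generation’s fit, and the fitted distribution drifts and loses its tails, much like a photo of a photo losing detail.

```python
import random
import statistics

random.seed(0)

# Generation 0: the "original photo" is a standard Gaussian.
mu, sigma = 0.0, 1.0
sigmas = [sigma]

for generation in range(50):
    # Each generation trains (fits) on a small sample of its predecessor's
    # output rather than on the original data.
    sample = [random.gauss(mu, sigma) for _ in range(5)]
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)  # finite-sample fit loses the tails
    sigmas.append(sigma)

# Over many generations, sigma typically drifts toward zero: each
# "photo of a photo" preserves less of the original distribution.
```

The small sample size (5) exaggerates the effect for illustration; with more data per generation the drift is slower but the same mechanism applies.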

    This research may have serious implications for the largest AI models of today, because they use the internet as their database. GPT-3, for example, was trained in part on data from Common Crawl, an online repository of over 3 billion web pages. And the problem is likely to get worse as an increasing number of AI-generated junk websites start cluttering up the internet. 

    Current AI models aren’t just going to collapse, says Shumailov, but there may still be substantive effects: The improvements will slow down, and performance might suffer. 

    To determine the potential effect on performance, Shumailov and his colleagues fine-tuned a large language model (LLM) on a set of data from Wikipedia, then fine-tuned the new model on its own output over nine generations. The team measured how nonsensical the output was using a “perplexity score,” which measures an AI model’s confidence in its ability to predict the next part of a sequence; a higher score translates to a less accurate model. 
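As a rough illustration of the metric (my sketch, not the researchers’ code), perplexity can be computed directly from the probabilities a model assigns to each observed next token:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability the model
    assigned to each observed next token; higher means less accurate."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A confident model assigns high probability to each observed token.
confident = perplexity([0.9, 0.8, 0.95, 0.85])   # low perplexity
# A degraded model spreads probability thinly over its vocabulary.
degraded = perplexity([0.2, 0.1, 0.15, 0.05])    # much higher perplexity
```

A model that predicted every token with probability 1 would have a perplexity of exactly 1, the theoretical floor.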

    The models trained on other models’ outputs had higher perplexity scores. For example, for each generation, the team asked the model for the next sentence after the following input:

    “some started before 1360—was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular.”

    On the ninth and final generation, the model returned the following:

    “architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.”

    Shumailov explains what he thinks is going on using this analogy: Imagine you’re trying to find the least likely name of a student in school. You could go through every student name, but it would take too long. Instead, you look at 100 of the 1,000 student names. You get a pretty good estimate, but it’s probably not the correct answer. Now imagine that another person comes and makes an estimate based on your 100 names, but only selects 50. This second person’s estimate is going to be even further off.
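The analogy can be made concrete with a small simulation (hypothetical numbers, chosen for illustration): estimating the frequency of a rare name from progressively smaller subsamples produces progressively noisier answers.

```python
import random

random.seed(42)

# Hypothetical school roster: 990 common names and 10 rare ones (1%).
population = ["common"] * 990 + ["rare"] * 10
TRUE_FRACTION = 0.01

def mean_abs_error(sample_size, trials=2000):
    """Average error of the rare-name fraction when estimated from
    random subsamples of the given size."""
    total = 0.0
    for _ in range(trials):
        s = random.sample(population, sample_size)
        total += abs(s.count("rare") / sample_size - TRUE_FRACTION)
    return total / trials

err_first = mean_abs_error(100)   # the first person's 100-name estimate
err_second = mean_abs_error(50)   # the second person's 50-name estimate
```

Averaged over many trials, the 50-name estimate is reliably further from the true 1% than the 100-name estimate, which is the sense in which each successive model "sees" less of the tail of the data.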

    “You can certainly imagine that the same happens with machine learning models,” he says. “So if the first model has seen half of the internet, then perhaps the second model is not going to ask for half of the internet, but actually scrape the latest 100,000 tweets, and fit the model on top of it.”

    Additionally, the internet doesn’t hold an unlimited amount of data. To feed their appetite for more, future AI models may need to train on synthetic data—or data that has been produced by AI.   

    “Foundation models really rely on the scale of data to perform well,” says Shayne Longpre, who studies how LLMs are trained at the MIT Media Lab, and who didn’t take part in this research. “And they’re looking to synthetic data under curated, controlled environments to be the solution to that. Because if they keep crawling more data on the web, there are going to be diminishing returns.”

    Matthias Gerstgrasser, an AI researcher at Stanford who authored a different paper examining model collapse, says adding synthetic data to real-world data instead of replacing it doesn’t cause any major issues. But he adds: “One conclusion all the model collapse literature agrees on is that high-quality and diverse training data is important.”

    Another effect of this degradation over time is that information that affects minority groups is heavily distorted in the model, as it tends to overfocus on samples that are more prevalent in the training data. 

    In current models, this may affect underrepresented languages as they require more synthetic (AI-generated) data sets, says Robert Mahari, who studies computational law at the MIT Media Lab (he did not take part in the research).

    One idea that might help avoid degradation is to make sure the model gives more weight to the original human-generated data. Another part of Shumailov’s study allowed future generations to sample 10% of the original data set, which mitigated some of the negative effects. 
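A minimal sketch of that mitigation (assumed details; the study’s actual pipeline is not described here) is to reserve a fixed share of each generation’s training set for human-written originals:

```python
import random

random.seed(7)

def build_training_set(synthetic, original, original_share=0.10):
    """Replace a fixed share of the synthetic training set with
    human-written originals, anchoring each generation to real data."""
    n_orig = max(1, int(len(synthetic) * original_share))
    mixed = synthetic[:len(synthetic) - n_orig] + random.sample(original, n_orig)
    random.shuffle(mixed)
    return mixed

# Hypothetical example: 100 human-written and 100 model-generated texts.
human = [f"human_{i}" for i in range(100)]
gen_output = [f"synthetic_{i}" for i in range(100)]
training = build_training_set(gen_output, human)  # 90 synthetic + 10 human
```

This only works if the pipeline can tell which examples are human-written in the first place, which is the data-provenance problem the article turns to next.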

    That would require making a trail from the original human-generated data to further generations, known as data provenance.

    But provenance requires some way to filter the internet into human-generated and AI-generated content, which hasn’t been cracked yet. Though a number of tools now exist that aim to determine whether text is AI-generated, they are often inaccurate.

    “Unfortunately, we have more questions than answers,” says Shumailov. “But it’s clear that it’s important to know where your data comes from and how much you can trust it to capture a representative sample of the data you’re dealing with.”

    Article link: https://www.technologyreview.com/2024/07/24/1095263/ai-that-feeds-on-a-diet-of-ai-garbage-ends-up-spitting-out-nonsense/

    The Blurred Reality of AI’s ‘Human-Washing’ – Wired

    Posted by timmreardon on 07/24/2024
    Posted in: Uncategorized.

    JUL 18, 2024 8:00 

    This week, we examine the trend among generative AI chatbots to flirt, stammer, and try to make us believe they’re human—a development that some researchers say crosses an ethical line.

    VOICE ASSISTANTS HAVE become a constant presence in our lives. Maybe you talk to Alexa or Gemini or Siri to ask a question or to perform a task. Maybe you have to do a little back and forth with a voice bot whenever you call your pharmacy, or when you book a service appointment at your car dealership. You may even get frustrated and start pleading with the robot on the other end of the line to connect you with a real human.

    That’s the catch, though: These voice bots are starting to sound a lot more like actual humans, with emotions in their voice, little tics and giggles in between phrases, and the occasional flirty aside. Today’s voice-powered chatbots are blurring the lines between what’s real and what’s not, which prompts a complicated ethical question: Can you trust a bot that insists it’s actually human?


    This week, Lauren Goode tells us about her recent news story on a bot that was easily tricked into lying and saying it was a human. And WIRED senior writer Paresh Dave tells us how AI watchdogs and government regulators are trying to prevent natural-sounding chatbots from misrepresenting themselves.

    Show Notes

    Read more about the Bland AI chatbot, which lied and said it was human. Read Will Knight’s story about researchers’ warnings of the manipulative power of emotionally expressive chatbots.

    Recommendations

    Lauren recommends The Bee Sting by Paul Murray. (Again.) Paresh recommends subscribing to your great local journalism newsletter or Substack to stay informed about important local issues. Mike recommends Winter Journal, a memoir by Paul Auster.

    Paresh Dave can be found on social media @peard33. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

    How to Listen

    You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

    If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. We’re on Spotify too. And in case you really need it, here’s the RSS feed.

    Article link: https://www.wired.com/story/gadget-lab-podcast-651/

    Report Card: Assessing Electronic Health Record Modernization at the Captain James A. Lovell Federal Health Care Center – House Committee on Veterans Affairs

    Posted by timmreardon on 07/23/2024
    Posted in: Uncategorized.

    How Chiplets are revolutionizing Semiconductor Industry? – Techovedas

    Posted by timmreardon on 07/17/2024
    Posted in: Uncategorized.

🚀 The Pizza Party: Imagine you’re hosting a pizza party and want to satisfy all your guests’ tastes. Instead of baking one gigantic pizza with every imaginable topping, you make individual slices with different toppings. Each slice represents a specialized component, or chiplet, optimized for a specific function: one slice might have pepperoni for CPU processing power, another mushrooms for graphics processing, and another olives for memory storage. By baking these slices separately and then assembling them on a common crust, you create a customized pizza that caters to everyone’s preferences. Some guests want more CPU power, so they take more pepperoni slices; others prioritize graphics performance, so they go for mushrooms; and some want a balance of both, so they choose a variety of slices.

    🚀 What is a chiplet? A chiplet is a discrete, modular component of an integrated circuit (IC) that performs a specific function, such as processing, memory, or input/output (I/O). Instead of fabricating an entire semiconductor device on a single monolithic die, chiplets allow designers to split the functionality into smaller, individual components that can be manufactured separately and then integrated onto a common substrate or package.

    🚀 Why are Chiplets Important?

    🔵 Manufacturing Efficiency: With ever-shrinking process nodes and increasing IC design complexity, manufacturing entire chips on a single die becomes challenging and costly. Chiplets enable more efficient manufacturing by allowing each component to be fabricated on the most suitable process node and technology.

    🔵 Performance Optimization: Chiplets allow designers to mix and match components optimized for specific functions. For example, a CPU chiplet can be combined with specialized chiplets for graphics processing, memory, or AI acceleration, yielding better performance and power efficiency.

    🔵 Time-to-Market: Developing a new semiconductor device from scratch can take years. By using chiplets, designers can leverage pre-existing, proven components, reducing development time and speeding up time-to-market for new products.

    🔵 Scalability and Flexibility: Chiplets offer scalability and flexibility in design. Manufacturers can easily scale the number of chiplets in a package to meet different performance and cost requirements without having to redesign the entire system.

    🔵 Cost Reduction: Chiplets can lead to cost savings in several ways, including reduced development costs, lower manufacturing costs due to improved yield rates, and increased reusability of components across different products.
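The mix-and-match composition described above can be captured in a toy model. This is purely illustrative: the `Chiplet` and `Package` classes and all the function names are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chiplet:
    """A discrete component fabricated on its own process node."""
    function: str    # e.g. "cpu", "gpu", "io"
    process_nm: int  # node chosen to suit this function

class Package:
    """A substrate that interconnects independently manufactured chiplets."""
    def __init__(self):
        self.chiplets = []

    def place(self, chiplet: Chiplet) -> "Package":
        self.chiplets.append(chiplet)
        return self  # chainable, mirroring "mix and match"

    def functions(self) -> set:
        return {c.function for c in self.chiplets}

# Scale one function (here, CPU) by adding more of that chiplet type,
# without redesigning the GPU or I/O components.
pkg = (Package()
       .place(Chiplet("cpu", 5))
       .place(Chiplet("cpu", 5))
       .place(Chiplet("gpu", 7))
       .place(Chiplet("io", 16)))
```

Note how each chiplet carries its own process node: the CPU dies can sit on a leading-edge node while the I/O die uses a cheaper, mature one.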

    Article link: https://www.linkedin.com/posts/kumar-priyadarshi-b0a2a7a2_how-chiplets-are-revolutionizing-semiconductor-activity-7219181569670803456-2DtD?

    How Chiplets Can Change the Future by extending Moore’s law – Techovedas

    Posted by timmreardon on 07/17/2024
    Posted in: Uncategorized.

    EDITORIAL TEAM

    AUGUST 15, 2023

    INTERNATIONAL, LEARN VLSI, SEMICONDUCTOR NEWS

    Introduction

    In the world of technology, innovation is a constant companion, pushing the boundaries of what’s possible. One of the latest marvels to capture the attention of tech enthusiasts and experts alike is the concept of “chiplets.” 

    Imagine these as tiny, specialized building blocks that can be combined in various ways to create powerful and efficient electronic devices. In this blog post, we’ll delve into the world of chiplets, breaking down the complex concepts into easy-to-understand terms, using a simple analogy to help even a novice grasp the potential they hold for transforming the semiconductor industry.

    “Chiplets are a game-changer for the semiconductor industry. They offer a way to improve performance, reduce power consumption, and increase design flexibility.” – 

    ~Pat Gelsinger, CEO of Intel

    The Basics: What are Chiplets?

    To put it simply, chiplets are like Lego pieces for computers. Imagine you have a collection of Lego bricks, each designed for a specific function – some are engines, some are wheels, and some are windows. You can take these different pieces and snap them together to create a custom vehicle that suits your needs. 

    Similarly, chiplets are tiny electronic components, each with its own unique functionality. These chiplets can be combined to create powerful and specialized devices, just like Lego pieces can be combined to build intricate structures.

    Read more: Microprocessors vs. Microcontrollers: A Cake Analogy

    The Lego-Like Assembly

    Think of a computer or any electronic device as a puzzle, with each piece contributing to the overall functionality. Traditionally, these pieces (or chips) were large, monolithic structures that contained all the necessary functions in one piece. 

    However, this approach had limitations. It was like trying to build an entire vehicle with just one type of Lego brick – it could work, but there was limited room for customization and optimization.

    Enter chiplets, the Lego pieces of the tech world. Instead of using one giant chip, designers now use smaller chiplets, each dedicated to a specific task. 

    Just as you’d use different types of Lego pieces for different parts of a vehicle, chiplets can be made using different manufacturing processes to optimize their performance. This allows for more efficient use of the available silicon and results in improved overall performance.

    Early days and evolution

    The term “chiplet” was coined by John Wawrzynek, a professor at the University of California, Berkeley, in 2006. However, the concept of chiplets has been around for much longer. 

    In the early 1990s, IBM developed a technology called Multichip Module (MCM) that allowed multiple chips to be interconnected on a single substrate. MCMs were used in some high-performance computing systems, but they were not widely adopted due to their high cost.

    In recent years, there has been renewed interest in chiplets due to the challenges of scaling traditional monolithic chips. As the size of transistors continues to shrink, it becomes increasingly difficult to manufacture monolithic chips with high yields. 

    Chiplets offer a potential solution to this problem, as they can be made using different manufacturing processes and then interconnected on a single substrate.

    AMD was one of the first companies to adopt chiplet technology. In 2017, the company released its first-generation Epyc server processors, which combined multiple compute dies and their cache in a single package; its later Zen 2-based Ryzen CPUs went further, splitting compute and I/O onto separate chiplets.

    AMD has since released several other products that use chiplets, including its Epyc CPUs and its Instinct GPUs.

    Intel is also working on chiplet-based designs. The company is expected to release its first chiplet-based CPU in 2023. Other companies that are working on chiplet technology include IBM, NVIDIA, and Qualcomm.

    Moore’s Law: A Quick Recap

    Moore’s Law, named after Intel co-founder Gordon Moore, observes that the number of transistors on an integrated circuit (IC) doubles approximately every two years. This exponential growth has fueled the astonishing progress in computing power over the decades. 

    However, as transistors approach atomic scales, the challenges of maintaining this pace become formidable.
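The doubling cadence behind Moore’s Law is easy to make concrete with a little arithmetic. The sketch below is illustrative only; the 4004 starting figure is a commonly cited historical count, and the two-year doubling period is the classic formulation, not a law of physics.

```python
def transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project transistor count under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Intel's 4004 (1971) had roughly 2,300 transistors. Projecting 20 years
# at a 2-year doubling gives 2,300 * 2**10 = 2,355,200 -- the right order
# of magnitude for early-1990s CPUs.
projection = transistors(2_300, 20)
```

The same exponential is what makes the end of the curve so hard: each further doubling demands transistors roughly half the size of the last generation.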

    Chiplets: A Synergy with Moore’s Law

    Enter chiplets, the architects of a harmonious dance with Moore’s Law. Imagine Moore’s Law as a fast-paced race, and chiplets as versatile teammates that ensure the race continues despite hurdles. How do they achieve this?

    Enhanced Performance: Moore’s Law has propelled the integration of more transistors on a single chip, but chiplet technology takes this further. By utilizing chiplets with diverse manufacturing processes, the available silicon is utilized optimally. This synergy results in superior performance, a crucial facet in high-demand applications.

    Power Efficiency: The relentless quest to reduce power consumption finds an ally in chiplets. Just as a skilled athlete conserves energy with precise movements, chiplets can be customized for energy efficiency. This is particularly vital for mobile devices and other power-sensitive applications.

    Flexibility in Design: Think of Moore’s Law as a roadmap, and chiplets as customizable vehicles navigating it. Chiplets empower designers to create tailored devices by handpicking components, akin to choosing specific Lego bricks for a unique structure. This design flexibility ensures compatibility with the evolving landscape of Moore’s Law.

    The Synergy in Action: Real-World Instances

    In the context of Moore’s Law, consider the following real-world instances:

    AMD’s Ryzen and Epyc CPUs: Chiplets play a pivotal role in enhancing these CPUs’ capabilities. Different chiplets, each containing distinct computing elements, are combined to form powerful processors. This approach complements Moore’s Law by maximizing performance within its bounds.

    IBM’s Power10 CPU: Chiplets shine here by optimizing the interplay of processing units, memory controllers, and I/O blocks. This intelligent orchestration harmonizes with Moore’s Law, achieving enhanced scalability and efficiency.

    NVIDIA’s Hopper GPU: Chiplets collaborate seamlessly to construct potent GPUs with various compute engines and memory controllers. This amalgamation supports Moore’s Law by ensuring high-performance computing while accommodating its trends.

    The Future: Chiplets and the Evolution of Moore’s Law

    In essence, chiplets are the skillful dancers that keep Moore’s Law’s rhythm steady. As the law faces mounting challenges, chiplets offer a means to sustain progress. They counteract diminishing returns from traditional monolithic chips by offering performance improvements, power efficiency, and design adaptability. While Moore’s Law has set the stage, chiplets step onto it, embracing the challenges and ushering in a new era of technological evolution.

    Conclusion

    The tapestry of technology is woven with innovation, and chiplets emerge as a testament to this ceaseless progress. Their symbiotic relationship with Moore’s Law paints a vivid picture of adaptability and advancement. As chiplets evolve and synergize with Moore’s Law, the world of computing stands poised to embark on an exhilarating journey, where customizability, performance, and energy efficiency harmonize to shape the future of electronics. Just as Lego pieces yield endless possibilities, chiplets unfurl a limitless canvas upon which the next chapter of computing history is written.

    Article link: https://techovedas.com/how-chiplets-can-change-the-future-by-extending-moores-law/

    Agile Rehab: Replacing Process Dogma with Engineering to Achieve True Agility – InfoQ

    Posted by timmreardon on 07/16/2024
    Posted in: Uncategorized.

    By

    • Bryan FinsterDistinguished Engineer @Defense Unicorns

    reviewed by

    • Ben LindersTrainer / Coach / Adviser / Author / Speaker @BenLinders.com

    Key Takeaways

    • We saw negative outcomes from agile scaling frameworks. Focusing on “Why can’t we deliver today’s work today?” forced us to find and fix the technical and process problems that prevented agility.
    • We couldn’t deliver better by only changing how development was done. We had to restructure the organization and the application.
    • The ability to deliver daily improved business delivery and team morale. It’s a more humane way to work.
    • Optimize for operations and always use your emergency process to deliver everything. This ensures hotfixes are safe while also driving improvement in feature delivery.
    • If you want to retain the improvement and the people who did it, measure it to show evidence when management changes, or you might lose it and the people.

    Struggling with your “agile transformation?” Is your scaling framework not providing the outcomes you hoped for? In this article, we’ll discuss how teams in a large enterprise replaced heavy agile processes with Conway’s Law and better engineering to migrate from quarterly to daily value delivery to the end users.

    Replacing Agile with Engineering

    We had a problem. After years of “Agile Transformation” followed by rolling out the Scaled Agile Framework, we were not delivering any better. In fact, we delivered less frequently with bigger failures than when we had no defined processes. We had to find a way to deliver better value to the business. SAFe, with all of the ceremonies and process overhead that comes with it, wasn’t getting that done. Our VP read The Phoenix Project, got inspired, and asked the senior engineers in the area to solve the problem. We became agile by making the engineering changes required to implement Continuous Delivery (CD).

    Initially, our lead time to deliver a new capability to the business averaged 12 months, from request to delivery. We had to fix that. The main problem, though, is that creating a PI plan for a quarter, executing it in sprints, and delivering whatever passes tests at the end of the quarter ignores the entire reason for agile product development: uncertainty.

    Here is the reality: the requirements are wrong, we will misunderstand them during implementation, or the end users’ needs will change before we deliver. One of those is always true. We need to mitigate that with smaller changes and faster feedback to more rapidly identify what’s wrong and change course. Sometimes, we may even decide to abandon the idea entirely. The only way we can do this is to become more efficient and reduce the delivery cost. That requires focusing on everything regarding how we deliver and engineering better ways of doing that.

    Why Continuous Delivery?

    We wanted the ability to deliver more frequently than 3-4 times per year. We believed that if we took the principles and practices described in Continuous Delivery by Jez Humble and Dave Farley seriously, we’d be able to improve our delivery cadence, possibly even push every code commit directly to production. That was an exciting idea to us as developers, especially considering the heavy process we wanted to replace.

    When we began, the minimum time to deliver a normal change was three days. It didn’t matter if it was a one-line change to modify a label or a month’s worth of work — the manual change control process required at least three days. In practice, it was much worse. Since the teams were organized into feature teams and the system was tightly coupled, the entire massive system had to be tested and released as a single delivery. So, today’s one-line change will be delivered, hopefully, in the next quarterly release unless you miss merge week.

    We knew if we could fix this, we could find out faster if we had quality problems, the problems would be smaller and easier to find, and we’d be able to add regression tests to the pipeline to prevent re-occurrence and move quickly to the next thing to deliver. When we got there, it was true. However, we got something more.

    We didn’t expect how much better it would be for the people doing the work. I didn’t expect it to change my entire outlook on the work. When you don’t see your work used, it’s joyless. When you can try something, deliver it, and get rapid feedback, it brings joy back to development, even more so when you’ve improved your test suite to the point where you don’t fear every keystroke. Getting into a CD workflow made me intolerant of working the way we were before. I feel process waste as pain. I won’t “test it later when we get time.” I won’t work that way ever again. Work shouldn’t suck.

    Descale and Decouple for Speed

    We knew we’d never be able to reach our goals without changing the system we were delivering. It was truly monstrous. It was the outcome of taking three related legacy systems and a fourth unrelated legacy system and merging them, with some splashy new user interfaces, into a bigger legacy system. A couple of years before this improvement effort, my manager asked how many lines of code the system was. Without comments, it was 25 million lines of executable code. Calling the architecture “spaghetti” would be a dire insult to pasta. Where there were web services, the service boundaries were defined by how big the service was. When it got “too big,” a new service, Service040, for example, would be created.

    We needed to break it up to make it easier to deliver and modernize the tech stack. Step one was using Domain Driven Design to start untangling the business capabilities in the current system. We aimed to define specific capabilities and assign each to a product team. We knew about Conway’s Law, so we decided that if we were going to get the system architecture we needed, we needed to organize the teams to mirror that architecture. Today, people call that the “reverse Conway maneuver.”  We didn’t know it had a name. I’ve heard people say it doesn’t work. They are wrong. We got the system architecture we wanted by starting with the team structure and assigning each a product sub-domain. The internal architecture of each team’s domain was up to them. However, they were also encouraged to use and taught how to design small services for the sub-domains of their product.

    We also wanted to ensure every team could deliver without the need to coordinate delivery with any other team. Part of that was how we defined the teams’ capabilities, but having the teams focus on Contract Driven Development (CDD) and evolutionary coding was critical. CDD is the process where teams with dependencies collaborate on API contract changes and then validate they can communicate with that new contract before they begin implementing the behaviors. This makes integration the first thing tested, usually within a few days of the discussion. Also important is how the changes are coded.

    The consumer needs to write their component in a way that allows their new feature to be tested and delivered with the provider’s new contract but not activated until that contract is ready to be consumed. The provider needs to make changes that do not break the existing contract. Working this way, the consumer or provider can deliver their changes in any order. When both are in place, the new feature is ready to release to the end user.

    By deliberately architecting product boundaries, the teams building each product, focusing on evolutionary coding techniques, and “contract first” delivery, we enabled each team to run as fast as possible. SAFe handles dependencies with release trains and PI planning meetings. We handled them with code. For example, if we had a feature that also required another team to implement a change, we could deploy our change and include logic that would activate our feature when their feature was delivered. We could do that either with a configuration change or, depending on the feature, simply have our code recognize the new properties in the contract were available and activate automatically.  
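The deploy-in-any-order pattern described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual code; the `discount` field and the handler name are hypothetical.

```python
def handle_order(response: dict) -> dict:
    """Consumer-side handler that tolerates either contract version.

    The hypothetical 'discount' field represents the provider's new
    contract. The feature activates automatically once the provider
    starts sending it, so either team can deploy first.
    """
    total = response["amount"]
    if "discount" in response:         # new contract available yet?
        total -= response["discount"]  # feature switches itself on
    return {"total": total}

# Old contract: the new feature stays dormant.
old = handle_order({"amount": 100})
# New contract: the feature activates with no consumer redeploy.
new = handle_order({"amount": 100, "discount": 15})
```

The provider's obligation is the mirror image: add the new field without removing or changing the old ones, so existing consumers keep working.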

    Accelerating Forces Learning

    It took us about 18 months after forming the first pilot product teams to get the initial teams to daily delivery. I learned from doing CD in the real world that you are not agile without CD. How can you claim to be agile if it takes two or more weeks to validate an idea? You’re emotionally invested by then and have spent too much money to let the idea go.

    You cannot execute CD without continuous integration (CI). Because we took CI seriously, we needed to make sure that all of the tests required to validate a change were part of the commit for that change. We had to test during development. However, we were blocked by vague requirements. Focusing on CI pushed the team to understand the business needs and relentlessly remove uncertainty from acceptance criteria.

    On my team, we decided that if we needed to debate story points, it was too big and had too much uncertainty to test during development. If we could not agree that anyone on the team could complete something in two days or less, we decomposed the work until we agreed. By doing this, we had the clarity we needed to stop doing exploratory development and hoping that was what was being asked for. Because we were using Behavior Driven Development (BDD) to define the work, we also had a more direct path from requirement to acceptance tests. Then, we just had to code the tests and the feature and run them down the pipeline.

    You need to dig deep into quality engineering to be competent at CD. Since the CD pipeline should be the only method for determining if something meets our definition of “releasable,” a culture of continuous quality needs to be built. That means we are not simply creating unit tests. We are looking at every step, starting with product discovery, to find ways to validate the outcomes of that step. We are designing fast and efficient test suites. We are using techniques like BDD to validate that the requirements are clear. Testing becomes the job. Development flows from that.

    This also takes time for the team to learn, and the best thing to do is find people competent at discovery to help them design better tests. QA professionals who ask, “What could go wrong?” and help teams create strategies to detect it are gold; the vast majority are trained only to write test automation. However, under no circumstances should QA be developing the tests themselves, because then they become a constraint rather than a force multiplier. CD can’t work that way.

    The most important thing I learned was that it’s a more humane way of working. There’s less stress, more teamwork, less fear of breaking something, and much more certainty that we are probably building the right thing. CD is the tool for building high-performing teams.

    Optimize for Operations

    All pipelines should be designed for operations first. Life is uncertain — production breaks. We need the ability to fix things quickly without throwing gasoline on a dumpster fire. I carried a support pager for 20 years.  The one thing that was true for most of that time was that we always had some workaround process for delivering things in an emergency. This means that the handoffs we had for testing for normal changes were bypassed for an emergency. Then, we would be in a dumpster fire, hoping our bucket contained water and not gasoline.

    With CD, that’s not allowed. We have precisely one process to deliver any change: the pipeline. The pipeline should be deterministic and contain all the validations to certify that an artifact meets our definition of “releasable.” Since, as a principle, we never bypass or deactivate quality gates in the pipeline for emergencies, we must design good tests for all of our acceptance criteria and continue to refine them to be fast, efficient, and effective as we learn more about possible failure conditions. This ensures hotfixes are safe while also driving improvement in feature delivery. This takes time, and the best thing to do is to define all of the acceptance criteria and measure how long it takes to complete them all, even the manual steps. Then, use the cycle time of each manual process as a roadmap for what to automate next.
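The "measure every manual step, then automate the slowest" approach above amounts to a simple prioritization. The sketch below is a hypothetical illustration with made-up step names and timings, not the team's actual tooling.

```python
# Hypothetical acceptance-criteria steps still done by hand,
# with measured cycle times in minutes per release.
manual_steps = {
    "full regression suite": 240,
    "security scan": 90,
    "release notes review": 15,
}

def automation_roadmap(steps: dict) -> list:
    """Order manual steps slowest-first: the next automation target
    is whatever currently costs the most per release."""
    return sorted(steps, key=steps.get, reverse=True)

roadmap = automation_roadmap(manual_steps)
```

Re-measuring after each automation win keeps the roadmap honest: the "slowest step" changes as the pipeline improves.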

    We focused on the duration of our pipeline and ensured we were testing everything required to deliver our product. We, the developers, took over all of the test automation. This took a lot of conversation with our Quality Engineering area, since the pattern before was for them to write all the tests. However, we convinced them to let us try our way. The results proved that our way was better. We no longer had tests getting out of sync with the code, the tests ran faster, and they were far less likely to return false positives. We trusted our tests more every day, and they proved their value later when the team was put under extreme stress by a “critical project” with expanding scope and a shrinking timeline.

    Another critical design consideration was that we needed to validate that each component was deliverable without integrating the entire system. Using E2E tests for acceptance testing is a common but flawed practice. If we execute DDD correctly, then any need to do E2E for acceptance testing can be viewed as an architecture defect. They also harm our ability to address impacting incidents.

    For example, if done with live services, one of the tests we needed to run required creating a dummy purchase order in another system, flowing that PO through the upstream supply chain systems, processing that PO with the legacy system we were breaking apart, and then running our test. Each test run required around four hours. That’s a way to validate that our acceptance tests are valid occasionally, but not a good way to do acceptance testing, especially not during an emergency. Instead, we created a virtual service. That service could return a mock response when we sent a test header so we could validate we were integrating correctly. That test required milliseconds to execute rather than hours.

    We could run it every time we made a change to the trunk (multiple times per day) and have a high level of confidence that we didn’t break anything. That test also prevented a problem from becoming a bigger problem. My team ran our pipeline, and that test failed. The other team had accidentally broken their contract, and the test caught it within minutes of it happening and before that break could flow to production. Our focus on DDD resulted in faster, more reliable tests than any of the attempts at E2E testing that the testing area attempted. Because of that, CD made operations more robust.
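The virtual-service idea above boils down to a dispatch on an agreed test header. Here is a minimal sketch of the concept; the endpoint shape, the `X-Test-Mode` header name, and the canned payload are all hypothetical stand-ins, not the team's real system.

```python
def purchase_order_service(request: dict) -> dict:
    """Hypothetical provider endpoint. When the agreed test header is
    present it returns a canned response matching the real contract
    shape, so a consumer can verify integration in milliseconds
    instead of driving a live PO through the whole supply chain."""
    if request.get("headers", {}).get("X-Test-Mode") == "true":
        return {"po_id": "TEST-0001", "status": "accepted"}
    return process_live_purchase_order(request)

def process_live_purchase_order(request: dict) -> dict:
    # Real path, exercised only with live traffic (not shown here).
    raise NotImplementedError

# A contract check the pipeline can afford to run on every trunk commit:
reply = purchase_order_service({"headers": {"X-Test-Mode": "true"}})
```

The value of the mock is exactly that it conforms to the contract: if the provider breaks the shape of the canned response, the consumer's pipeline fails within minutes, as in the incident described above.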

    Engineering Trumps Scaling Frameworks

    We loved development again when we were able to get CD off the ground. Delivery is addictive, and the more frequently we can do that, the faster we learn. Relentlessly overcoming the problems that prevent frequent delivery also lowers process overhead and the cost of change. That, in turn, makes it economically more attractive to try new ideas and get feedback instead of just hoping your investment returns results. You don’t need SAFe’s PI planning when you have product roadmaps and teams that handle dependencies with code. PI plans are static.

    Roadmaps adjust from what we learn after delivering. Spending two days planning how to manage dependencies with the process and keeping teams in lock-step means every team delivers at the pace of the slowest team. If we decouple and descale, teams are unchained. Costs decrease. Feedback loops accelerate. People are happier. All of these are better for the bottom line.

    On the first team where we implemented CD, we improved our delivery cadence from monthly (or less) to several times per day. We had removed so much friction from the process that we could get ideas from the users, decompose them, develop them, and deliver them within 48 hours. Smaller tweaks could take less than a couple of hours from when we received the idea from the field. That feedback loop raised the quality and enjoyment level for us and our end users.

    Measure the Flow!

    Metrics is a deep topic and one I talk about frequently. One big mistake we made was not measuring the impact of our improvements on the business. When management changed, we didn’t have a way to show what we were doing was better. To be frank, the new management had other ideas – poorly educated ideas. Things degraded, and the best people left. Since then, I’ve become a bit obsessed with measuring things correctly.

    For a team wanting to get closer to CD, focus on a few things first:

    1. How frequently are we integrating code into the trunk? For CI, this should be at least once per day per team member on average. Anything less is not CI. CI is a forcing function for learning to break changes into small, deliverable pieces.
    2. How long does it take for us, as a team, to deliver a story? We want this to be two days maximum. Tracking this and keeping it small forces us to get into the details and reduce uncertainty. It also makes it easy for us to forecast delivery and identify when something is trending later than planned. Keep things small.
    3. How long does it take for a change to reach the end of our pipeline? Pipeline cycle time is a critical quality feedback loop and needs to keep improving.
    4. How many defects are reported week to week? It doesn’t matter if they are implementation errors, “I didn’t expect it to work this way,” or “I don’t like this color.” Treat them all the same. They all indicate some failure in our quality process. Quality starts with the idea, not with coding.
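The first two checklist items above are simple to compute once the data is collected. This is an illustrative sketch with invented numbers; the names and thresholds come straight from the checklist, but the functions themselves are hypothetical.

```python
from datetime import date

# Hypothetical per-member trunk integrations over one 5-workday week.
integrations = {"alice": 7, "bob": 5, "carol": 4}

def meets_ci_bar(per_member: dict, workdays: int = 5) -> bool:
    """Item 1: at least one trunk integration per team member
    per day, on average. Anything less is not CI."""
    avg_per_day = sum(per_member.values()) / (len(per_member) * workdays)
    return avg_per_day >= 1.0

def story_cycle_days(started: date, delivered: date) -> int:
    """Item 2: story cycle time; the target is two days maximum."""
    return (delivered - started).days

ok = meets_ci_bar(integrations)                                 # 16/15 >= 1
cycle = story_cycle_days(date(2024, 7, 15), date(2024, 7, 17))  # 2 days
```

The point of automating these is the one the article makes: when management changes, a trend line is evidence; a feeling is not.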

    Since this journey, I’ve become passionate about continuous delivery as a forcing function for quality. I’ve seen on multiple teams in multiple organizations what a positive impact it has on everything about the work and the outcomes. As a community, we have also seen many organizations not take a holistic approach, throw tools at the problem, ignore the fact that this is a quality initiative, and hurt themselves. It’s important that people understand the principles and recommended practices before diving in head first.

    You won’t be agile by focusing on agile frameworks. Agility requires changing everything we do, beginning with engineering our systems for lower delivery friction. By asking ourselves, “Why can’t we deliver today’s work today?” and then relentlessly solving those problems, we improve everything about how we work as an organization. Deploy more and sleep better.

    Article link: https://www.infoq.com/articles/replace-process-dogma-engineering/

      • December 2013 (23)
      • November 2013 (48)
      • October 2013 (25)
    • Tags

      Business, Defense Department, Department of Veterans Affairs, EHealth, EHR, Electronic health record, Food and Drug Administration, Health, Health informatics, Health Information Exchange, Health information technology, Health system, HIE, Hospital, IBM, Mayo Clinic, Medicare, Medicine, Military Health System, Patient, Patient portal, Patient Protection and Affordable Care Act, United States, United States Department of Defense, United States Department of Veterans Affairs
    Blog at WordPress.com.