Open Source

4 posts with the tag “Open Source”

Revitalizing Desktop UX: Why Linux Must Lead the Next Evolution

Desktop user interfaces have remained remarkably static for decades. From the Macintosh Finder’s clever middle-ellipsis filename truncation—a subtle tweak from the early 1980s still in use today—to the nuanced drag-and-drop mechanics that enable seamless file handling across windows, the core paradigms feel frozen in time. At the recent Ubuntu Summit 25.10, a veteran UX designer with roots at Apple and Google delivered a compelling wake-up call: are we doomed to the same desktop experience forever?

The speaker, drawing from four decades in the field, highlighted how Linux desktops inherited proven patterns from Mac and Windows. This wasn’t laziness; it was smart iteration. As Steve Jobs once quipped, echoing Picasso, “Good artists copy, great artists steal.” Early Linux environments creatively adapted these foundations, even influencing back with features like virtual desktops. But now, with proprietary giants stalled, open source has an opportunity—and arguably a responsibility—to pioneer anew.

Apple’s 2017 pivot to iPad as the “post-PC” future flopped. That infamous “What’s a Computer?” ad twisted the knife, positioning the Mac as obsolete, yet iPadOS’s forced window-manager choices and touch-first design never conquered productivity workflows. Shiny effects like “liquid glass” can’t mask the lack of substance.

Microsoft fares little better. Aggressive OneDrive prompts, Edge shilling, and the botched Recall feature (great idea, poor execution) erode trust. The speaker shared a personal anecdote: interviewing for Windows UX lead eight years ago, pitching radical changes, only to be politely rebuffed. “We dodged a bullet,” they noted, praising niche Windows experiments but lamenting mainstream inertia.

Linux enthusiasts often dismiss desktop refinements—“I use the CLI anyway”—but this misses the point. Robust desktop UX unlocks broader usability, enabling drags into apps, clipboard fluidity, and data flows that power non-technical users. Stagnation here stifles adoption.

Common Pushback and a Framework for the Future

Critics retort: “Desktop is for boomers,” “It’s a standard; don’t break it,” or “Users hate change.” All partially true, but flawed. Mobile dominates consumers, not enterprise CAD or codebases. Standards evolve—BlackBerry yielded to iPhone—and users adapted to cars, PCs, and smartphones despite initial resistance.

Enter the “Could, Should, Might, Don’t” mindset from Could, Should, Might: Thinking About the Future. “Could” sparks wild ideas (AI fever dreams); “Should” sets metrics (ethics, business); “Might” maps scenarios; “Don’t” defines boundaries (no data collection). Avoid their shadows: foolhardy visions, short-term preaching, unfocused fear, rigid gatekeeping. Open source thrives by drafting behind proven ideas, but with sources dry, it’s time to lead.

UX Beyond Pixels: Bridging Programmers and Designers

The misnomer “UX/UI” conflates deep research (user studies, personas, technology mapping, interaction flows) with superficial visuals; the icons come last. Programmers probe every edge case (“might”); designers prioritize user statistics (“should”). Tension arises: “That’s just your opinion.” The solution is a shared perspective grounded in research, as with Mastodon’s quote-post redesign, which drew on Twitter studies and input from marginalized voices to flip the framing from “reduce harm” to “enable good.”

Raph Koster’s Theory of Fun offers “learning loops”: intent → affordance → feedback → refined model. Super Mario teaches a single jump button across movement, climbing, and attack through progressive discovery; Nintendo reportedly invests 80% of its design effort in these loops.

Desktop text selection is a case in point: click to place the cursor, drag to select, double-click for a word. Mobile’s naive “tap = click” port botched this, leaving a single tap with four possible outcomes (place cursor, select, open menu, scroll). Research fixed it: force-press plus a magnifier and gesture menus cut a typical edit from five taps to one fluid motion.

A toy demo illustrated the idea: a hypothetical mouse “super” button (or key) for windows: click to close, drag to resize or reposition, a deeper press for clipboard and file operations, crossing window-manager, editor, and file-manager boundaries with layered gestures. Subtle, consistent, powerful.

Ditch grand AI visions and far-out physical UIs like Dynamicland; focus on modest growth between the CLI status quo and radical futures.

  1. Easy: KDE Connect 2.0 – Polish phone-desktop sync (Continuity-like). Prioritize Android SDK depth, consumer UX over programmer defaults. Bluetooth handoff for reliability?

  2. Medium: Super Windowing – Wayland-ready system weaving files, history, apps. User-research first: pains in versioning, flows. Prototype fast, iterate.

  3. Hard: Local Recall – Ethical, on-device LLM for history/clipboard smarts. Ultimate right-click? Gesture predictions? APIs needed, but experiments viable.

Fund like Ink & Switch: 1-3 person teams, 3 months build + 1 month paper. CRDTs emerged this way, spawning research ecosystems on shoestring budgets.

“When you’re finished changing, you’re finished,” warns Benjamin Franklin (via Brad Frost). Allocate “float” time—even 0.5%—beyond 70% maintenance/20% increments for blue-sky UX. Hardware leaps (100M× faster CPUs since 1984 Mac) demand software ambition. Canonical’s polish work is vital, but foundational shifts beckon.

Linux desktops aren’t relics; they’re poised for renaissance. Prototype, reflect, share. Color outside the lines—be Princess Leia, blast the hole, jump in. The future desktop awaits.

However, we must temper this “blue sky” ambition with a hard look at the “Graveyard of Ambition.” Why did Ubuntu’s Unity or GNOME 3.0 face such fierce backlash? Because for enterprise users, muscle memory is money. Radical change often breaks workflows. The challenge for Linux isn’t just to innovate, but to innovate without alienating the “Boomers” who keep the lights on. The next evolution must be a bridge, not a cliff—a lesson Microsoft learned the hard way with Windows 8.

Linux Foundation Establishes Agentic AI Foundation, Anchored by Anthropic's MCP Donation

In a significant step for open-source AI infrastructure, the Linux Foundation has announced the formation of the Agentic AI Foundation (AIF), a new neutral governance body dedicated to developing standards and tools for AI agents. Leading the charge is Anthropic’s donation of the Model Context Protocol (MCP), a rapidly adopted open standard that enables AI models and agents to seamlessly connect with external tools, APIs, and local systems.

The Rise of MCP: A Protocol for AI Integration

Born as an open-source project within Anthropic, MCP quickly gained traction due to its community-driven design. It standardizes communication between AI agents and the outside world—think sending messages, querying databases, adjusting IDE settings, or interacting with developer tools. Major platforms have already embraced it:

  • ChatGPT
  • Cursor
  • Gemini
  • Copilot
  • VS Code

Contributions from companies like GitHub and Microsoft further accelerated its growth, making MCP one of the fastest-evolving standards in AI. Previously under Anthropic’s stewardship, its transfer to AIF ensures broader, vendor-neutral governance.
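
Under the hood, MCP messages are JSON-RPC 2.0. As a sketch, an agent-to-server `tools/call` request looks roughly like this (the tool name and arguments below are invented for illustration; only the envelope and method follow the spec):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": { "sql": "SELECT COUNT(*) FROM users" }
  }
}
```

The server replies with a JSON-RPC result carrying the tool’s output, which the model folds back into its context. It is this thin, transport-agnostic envelope that let so many platforms adopt MCP so quickly.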

Agentic AI Foundation: Core Projects and Mission

Hosted by the Linux Foundation—a nonprofit powerhouse managing over 900 open-source projects, including the Linux kernel, PyTorch, and RISC-V—the AIF aims to foster transparent collaboration on agentic AI. Alongside MCP, the foundation incorporates:

  • Goose: A local-first, open-source agent framework leveraging MCP for reliable, structured workflows.
  • Agents.md: A universal Markdown standard adopted by tens of thousands of projects, providing consistent instructions for AI coding agents across repositories and toolchains.
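
As a rough sketch of the idea (the contents below are invented for illustration), an AGENTS.md is just a plain Markdown file at the repository root that any compliant coding agent reads for project-specific instructions:

```markdown
# AGENTS.md

## Build & test
- Install dependencies with `npm install`.
- Run the full test suite with `npm test` before proposing changes.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Keep commits small and reference the issue number in the message.
```

Because it is ordinary Markdown rather than a bespoke config format, the same file is readable by humans and by any agent toolchain.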

The AIF’s goal is clear: create a shared, open home for agentic infrastructure, preventing proprietary lock-in and promoting stability as AI agents integrate into everyday applications.

Handing MCP to the Linux Foundation neutralizes perceptions of single-vendor control, encouraging multi-company adoption and long-term stability. Founding Platinum members—each paying $350,000 annually for board seats, voting rights, and strategic influence—include:

| Platinum Member | Notable Quote |
| --- | --- |
| AWS | “Excited to see the Linux Foundation establish the Agentic AI Foundation.” |
| Anthropic | (Donor of MCP) |
| Block | - |
| Bloomberg | “MCP is a foundational building block for APIs in the era of agentic AI.” |
| Cloudflare | “Open standards like MCP are essential to enabling a thriving developer ecosystem.” |
| Google Cloud | “New technology gets widely adopted through shared standards.” |
| Microsoft | “For an agentic future to become reality, we have to build together and in the open.” |
| OpenAI | - |

These tech giants gain priority visibility, committee access, and leadership summit invitations, signaling strong industry commitment despite ongoing debates over their proprietary models.

While ironic—given these firms’ closed-source frontier models—this move counters AI fragmentation. By aligning on protocols like MCP under Linux Foundation oversight, developers benefit from interoperability without vendor lock-in. As agentic AI proliferates, AIF positions open source as a stabilizing force, much like Linux has for operating systems.

This development marks a win for collaborative innovation, ensuring AI tools evolve transparently. Time will tell if it delivers on neutrality, but the foundation is set for agentic AI to scale responsibly.

However, the platinum roster reads like a Who’s Who of Big Tech—AWS, Microsoft, Google—raising the specter of “corporate capture.” While the Linux Foundation has successfully herded cats before, there’s a risk that this body becomes less about “open source” in the Stallman sense and more about creating an interoperability layer for proprietary giants. If “open” standards simply make it easier to link closed-source models like Claude and GPT, does the open ecosystem actually win? The challenge for AIF will be proving it’s more than just a lobbying arm for the oligopoly, ensuring that independent developers aren’t just consumers of these standards, but architects of them.

GitHub Actions' "Deranged" Sleep Loop: Years of Bugs Costing Developers Thousands

GitHub Actions, the ubiquitous CI/CD platform powering workflows for millions of repositories, harbors a notorious four-line Bash function that’s been lambasted as “utterly deranged” by programming luminaries. This sleep mechanism, meant to pause execution briefly, has instead spawned infinite loops, zombie processes, and runaway bills—issues persisting for nearly a decade despite fixes sitting unmerged.

The saga begins around 2016 with the initial public commits to the GitHub Actions runner codebase. Early versions reveal Windows developers grappling with Bash scripting, resorting to a Stack Overflow hack from 16 years prior: using ping to simulate a 5-second delay when sleep wasn’t available.

```bash
if [ $? -eq 4 ]; then
  sleep 5 || ping -n 6 127.0.0.1 > nul || (for i in `seq 1 5000`; do echo >&5; done)
fi
```

This fallback chain (sleep first, then ping -n 6 against loopback for a roughly five-second delay, or worst case a loop echoing 5,000 times to file descriptor 5) earned promotion to a top-level safe_sleep function. It was crude but mostly functional, if CPU-intensive.

By 2022, evolution took a darker turn. The code “improved” into this gem, which lingered for years:

```bash
start=$SECONDS
while [ $((SECONDS - start)) -ne ${1?} ]; do :; done
```
At first glance, it leverages Bash’s SECONDS variable (incrementing every second) for a precise wait. Pass 5, and it loops until exactly 5 seconds elapse. But here’s the fatal flaw: on busy CI runners juggling heavy jobs, loop iterations might skip seconds due to scheduling delays. If SECONDS - start jumps from 4 to 6, -ne 5 never falsifies, trapping the process in an eternal spin.

Worse, with no sleep inside the loop, it pegs a full CPU core at 100%—half the compute on GitHub’s standard 2-vCPU runners—starving other tasks and cascading failures across queues.

Real-World Carnage: $2,400 Zombie Processes and CI Meltdowns

The fallout was brutal. One developer reported a single runner spinning for 5,135 hours, billed at GitHub’s $0.008-per-minute rate: roughly $2,400 vaporized. Projects like Zig abandoned GitHub entirely for Codeberg, citing “inexcusable bugs” and “vibe scheduling” after Microsoft’s AI pivot, with seemingly random job prioritization exacerbating backlogs where even main-branch commits stalled.

The fix was simple: change -ne to -le, giving while [ $((SECONDS - start)) -le ${1?} ]; do :; done. This embraces overshoot, halting reliably even when a tick is skipped. Proposed in February 2022, the patch languished, was auto-closed after a month, and finally merged about a year and a half later amid public outcry. Other bugs persist, like recent file-hashing failures from a botched refactor.
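
As a standalone illustration, here is a minimal sketch of the corrected loop; the safe_sleep wrapper and the demo call are assumptions for illustration, not the runner’s actual code. Note that -le cures only the infinite spin: the loop still busy-waits, pegging a core for the duration.

```bash
#!/bin/bash
# Illustrative sketch of the merged -le fix; not the runner's exact code.

safe_sleep() {
  local start=$SECONDS
  # -le tolerates skipped ticks: once the elapsed count passes the target,
  # the test is false and the loop exits. With -ne, a jump from 4 straight
  # to 6 would never match 5, spinning forever.
  # NOTE: still a busy-wait; this pegs a CPU core while "sleeping".
  while [ $((SECONDS - start)) -le "${1?}" ]; do :; done
}

safe_sleep 1
echo "slept"
```

A real fix would of course call sleep itself; the point of the sketch is only that the comparison operator, not the busy-wait, is what caused the infinite loops.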

That refactor? A JavaScript snippet trading a clean Object.getOwnPropertyNames for nested loops and redundant if statements, ballooning complexity and introducing regressions.

```javascript
// Pre-refactor simplicity vs. post-refactor horror
function getKeys(obj) {
  return Object.getOwnPropertyNames(obj);
}
// Now: convoluted for-loops, duplicated code, function-returning-functions
```

GitHub Actions underpins a multi-billion-dollar Microsoft ecosystem, yet these gremlins fester. PRs ignored, refactors worsening code, neglect amid scale: it’s a stark reminder that even giants stumble on fundamentals. Proponents argue it’s “elegant Bash”—short, using : as a no-op—but critics like Matt Lad, of Antithesis, nail it: peak engineering this is not.

Amid the recent AI hype there is ironic comfort: no LLM could conjure logic this creatively catastrophic. Developers pay a premium for reliability, and GitHub must prioritize core stability over shine. Until then, audit your runners: that “harmless” sleep might be your budget’s silent killer.

But why would they fix it? This isn’t just “neglect”; it’s a symptom of monopoly lethargy. When you own the ecosystem—when millions of workflows are locked into your syntax—performance bugs that burn customer CPU cycles are, perversely, revenue generators. Every wasted minute of a zombie runner is a minute billed. In a competitive market, this would be a death sentence. In GitHub’s world, it’s a rounding error. The real fix isn’t just a patch; it’s viable competition that forces the giant to care about the ants.

Atom: The IDE That Accidentally Built Its Own Killer

On June 25, 2015, Chris Wanstrath celebrated Atom 1.0’s stable release—a free, open-source code editor built on web technologies that promised to democratize development. What started as a passion project in 2007, sparked by a chance meeting at a Ruby meetup where Wanstrath encountered Tom Preston-Werner demoing an early GitHub prototype called Grit, evolved into a hackable editor inspired by Emacs but powered by HTML, CSS, and JavaScript.

Atom’s journey wasn’t smooth. Shelved amid GitHub duties, it revived in 2011 using the ACE editor in a WebView, then pivoted to Chromium Embedded Framework and Node.js via Node-WebKit in 2012. This fusion birthed “Atom Shell”—a tool so potent it was rebranded Electron in 2015, decoupling it from Atom to fuel cross-platform desktop apps.

Electron’s appeal was immediate: leverage familiar web stacks for native-like apps, sidestep C++ hurdles of frameworks like Qt, enable rapid iterations, and reuse web codebases. Developers flocked to it for projects beyond editing, and giants followed—Slack, Discord, Microsoft Teams all run on Electron today, powering billions of interactions.

Atom’s 2014 beta exploded in popularity amid a surge in new programmers, its lightweight design and package ecosystem outshining bloated incumbents like Visual Studio. Backlash over its initially closed-source core clashed with Wanstrath’s open-source advocacy, but relicensing under MIT quelled critics, growing the user base to over 1.1 million.

Yet Electron’s bloat—bundling full Chromium and Node.js per app—haunted performance. Atom took seconds to open small files, guzzling 400MB RAM. Enter Microsoft’s Visual Studio Code (VS Code) in 2015, built on Monaco (evolved from their browser editor) and Electron. Skeptics dismissed it, but optimizations shone: isolated extension processes, pre-optimized Monaco, binary encoding. VS Code launched 4x faster, went fully open-source, and birthed a thriving marketplace.

The 2018 Microsoft-GitHub acquisition for $7.5B raised alarms. New CEO Nat Friedman promised dual support on Reddit, but reality diverged—VS Code iterated monthly while Atom stagnated, commits plummeting 76% in six months. By 2022, Atom sunsetted, repositories archived by late 2022, with Microsoft pivoting to cloud tools like GitHub Codespaces. Community forks like Pulsar emerged, but Atom’s era ended.

Atom’s legacy? Electron endures. VS Code dominates 2025 surveys—15-54% market share per PYPL and Stack Overflow data—holding off AI challengers despite Cursor’s 18% adoption and Zed’s buzz. Cursor, a VS Code fork with AI-native features like Composer and Visual Editor, hit $500M ARR by mid-2025, serving half the Fortune 500 with real-time collaboration and agent workflows. Zed, Rust-powered by ex-Atom contributors, gained Windows support in October 2025, previewing Dev Containers and AI commits, amassing momentum.

Electron powers it all: VS Code’s blinking cursors carry Atom’s ghost. In open source, death spawns successors—Zed’s 70K+ stars, Cursor’s explosive growth. As 2025 Stack Overflow data shows developers craving AI tools atop proven editors, Atom didn’t lose; its killer became the industry standard, forked eternally.

But there’s a deeper story here than just software genealogy. Zed isn’t just a spiritual successor; it’s a personal redemption. Nathan Sobo, the original creator of Atom, is also the architect behind Zed. For him, the pivot from Electron (which he helped pioneer) to Rust wasn’t just a technical decision—it was a correction of his own legacy’s greatest flaw: performance. In an industry obsessed with “new,” there’s profound poetry in a creator returning to fix what he broke, proving that open source isn’t just about codebases, but about the people who learn, fail, and build again.