
Canonical
on 17 January 2014

Instant eCommerce


Christmas shopping habits have changed recently: most people now buy gifts online to avoid endless queues. A lot of small and medium companies are looking at entering the eCommerce space. However, at the moment there are three choices:

  1. You choose a hosted solution with vendor lock-in.
  2. You choose an expensive commercial option.
  3. You build a solution yourself out of open source components.

None of them is ideal.

What if you could instantly deploy a complete eCommerce solution built from best-in-class open source components? A shop, a back-office ERP/CRM, web analytics, business intelligence, monitoring, etc. All integrated, but with the freedom to customise to your needs.
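To give a sense of what "instant" means here, deploying and wiring up such a stack with Juju might look like the sketch below. The `mysql` charm is real, but the storefront and back-office charm names (`my-shop`, `my-erp-crm`) are hypothetical placeholders; the actual services in the Lab may differ.

```shell
# Bootstrap a Juju environment on your configured cloud
juju bootstrap

# Deploy the building blocks of the stack
juju deploy mysql          # shared database
juju deploy my-shop        # hypothetical storefront charm
juju deploy my-erp-crm     # hypothetical back-office charm

# Wire the services together via relations
juju add-relation my-shop mysql
juju add-relation my-erp-crm mysql

# Make the storefront reachable from the outside
juju expose my-shop
```

The point of the Juju model is that each component stays an independent service: you can swap the shop or the analytics charm without rebuilding the rest of the stack.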

This is exactly what the Instant eCommerce Juju Lab wants to achieve.

As always, a Juju Lab succeeds if there is active participation and fails if there is not, so go and check it out!
