Docker Expands Relationship with Microsoft (techcrunch.com)
153 points by fortran77 | 2020-05-28 14:45:24+00:00 | 147 comments




What happened to Docker? They basically democratized this whole container thing and launched a paradigm shift in deployment. But you never hear from them anymore, except as the name of the command-line container launcher.

Or am I missing something here? Is there an interesting business story there of the missteps made?

Just to clarify, I run Docker every day (Docker for Mac)... my question is more about how much of the industry's revenue they are able to capture as a business. I wouldn't ask this if they were just an open source project with no business ambitions.


They got their business solution (docker-swarm) eaten by kubernetes. So now they're only really the CLI and the public registry.

On Linux they aren't even the CLI anymore. Buildah and podman are much better

I've struggled with podman, especially when following tutorials written for Docker that podman doesn't quite replicate. Is it worth persisting?

Yes, it's definitely worth persisting, especially if you use K8s. Podman can import/export Pod objects with the same API as K8s, so it makes it trivial to run the same k8s yaml locally and in a deployed environment.
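As a sketch of that round trip (the pod name "myapp" is illustrative), podman can both emit and consume Kubernetes-style Pod YAML:

```shell
# Export a running podman pod as a k8s-style Pod definition
podman generate kube myapp > myapp-pod.yaml

# Recreate it locally from that YAML...
podman play kube myapp-pod.yaml

# ...or apply the very same file to a real cluster
kubectl apply -f myapp-pod.yaml
```

`podman generate kube` and `podman play kube` are the relevant subcommands; the same file drives both environments.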

Running a lightweight Kubernetes setup like k3s locally is a better solution for that use case, imo.

k3s is something I've heard about but know very little. Does it use a VM? I tried skimming their website but couldn't parse that out.

No VM, it is just a lightweight distribution of Kubernetes that can be effortlessly installed and run on Linux.

I personally feel that any pain you might face with podman is worth it just for the ability to treat container applications just like any other application on your system

https://www.redhat.com/sysadmin/podman-shareable-systemd-ser...
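The linked article boils down to something like this sketch (container name "web" and the nginx image are illustrative): run a container as an unprivileged user, then let podman write a systemd unit for it.

```shell
# Run a container, then emit a systemd unit file for it
podman run -d --name web -p 8080:80 nginx
mkdir -p ~/.config/systemd/user
podman generate systemd --name web > ~/.config/systemd/user/container-web.service

# From here it's managed like any other user service
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
```

That's the "just another application on your system" angle: start/stop, logs, and boot-time behaviour all go through systemd.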


so daemonless (tbh I'm not even sure why this is a good thing) and better security through rootless aside, what are some strong points to switch over?

Rootless is really the big selling point for me. Means I can just let gitlab CI log in as a specific user to deploy my personal apps, without someone compromising gitlab/my gitlab account giving them root on my personal servers.

The other advantage is that it can set up containers based on a Kubernetes YAML file, or export a local setup to one. This is because Podman is really aimed at small-scale setups: if you would have used docker-compose with Docker, you might consider Podman, and the export lets you prototype with something convenient locally, then export a config when you're happy to test on the big complicated tooling.
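A sketch of that prototype-then-export workflow (pod name, ports, and the `myorg/myapp` image are illustrative):

```shell
# Prototype locally: one pod holding an app and its database
podman pod create --name devpod -p 8080:8080
podman run -d --pod devpod --name db -e POSTGRES_PASSWORD=dev postgres:12
podman run -d --pod devpod --name app myorg/myapp:latest

# When you're happy, export the whole pod as Kubernetes YAML
podman generate kube devpod > devpod.yaml
```

`devpod.yaml` can then go to whatever cluster tooling the team actually deploys with.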


How are they "much better"? Docker works extremely well. I build and run Docker containers all over the place and they Just Work.

They're newer, and fewer people have heard of them.

Not running as root, for one. There are also many other advantages, such as decoupling the building from the running, and running rootfs tarballs directly, blah blah blah.

I had the exact same question; for podman, one of the selling points is that it's a drop-in replacement and you don't need root privileges: https://opensource.com/article/19/2/how-does-rootless-podman...
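"Drop-in" here is fairly literal; a sketch of what that looks like in practice:

```shell
# Podman aims for CLI compatibility with docker, so an alias
# often covers day-to-day use
alias docker=podman

# No root, no daemon: this runs entirely as the invoking user.
# Inside the container you appear as root, but on the host the
# process is owned by your unprivileged user (user namespaces).
podman run --rm alpine id
```

Not every docker flag or edge case carries over, but the common build/run/push workflow generally does.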

"Docker" is just the userspace component of a Linux kernel feature. "Docker" doesn't run anything, the kernel does.

Podman has three main advantages over docker: Not needing to run as root, not requiring a daemon in the background and being packaged directly by linux distros.


1. systemd runs as root; it doesn't bother me that the docker daemon does as well, since they do similar things (for me). Only users with access to the socket file have access.

2. I like the idea of not having a daemon but never actually had a problem with this in practice. The daemon has never crashed on me. systemd also has daemons that have also never crashed on me.

3. It's like 3 lines to install the official docker package. This is a non-issue for me.
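For reference, the "3 lines" on Ubuntu are roughly this (a condensed sketch of the documented steps as of this era; check Docker's install docs for the current form):

```shell
# Add Docker's apt key and repository, then install the engine
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce
```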

Those do not sound like very meaningful advantages. Certainly not significant enough for me to want to switch from something that Just Works.

Thanks for the reply though. I'll be sticking with Docker.


The actual advantage of Podman and Buildah is that it is not Docker. Because of the tribal nature of tech communities, that is a desirable property for members of the "anti-Docker" tribe. Everything else is an exercise in retroactively justifying a subjective decision with seemingly objective criteria.

If you don't understand why others are so excited about those tools, it simply means that you're not part of their tribe.


Interesting! I now refuse to allow the docker daemon anywhere except a VM on my machine as it does some really stupid things, runs as root etc.

Are there any downsides to podman that you know about?


Not so much the distros part: unlike Docker, it's not in the latest Ubuntu 20.04 LTS. It's landed in the development branch of Debian though, so maybe someday.

(Granted Docker is only in Ubuntu's "universe" section and not as a supported package that would receive security patches etc)


It is in Fedora and Arch. Ubuntu generally doesn't really follow any technologies anyone other than Canonical pushes out.

Podman when not run as root has some significant drawbacks (e.g. containers can't communicate with each other). That's not specific to podman; it's just hard to do without root.

Podman has long-running processes as well: there's a podman process that'll run once you've launched at least one container, and a conmon for each container (equivalent to containerd-shim).

Packaged directly... it is by RH and SUSE; I don't think by Debian/Ubuntu. At least for Ubuntu, 20.04 packages Docker 19.03 just fine.


Containers within the same pod can certainly communicate with each other without root? I'm running that setup right now for my graylog container and its mongo and elasticsearch dependencies.

Within the same pod sure, they share the same netns. I was talking about individual container comms.

With rootless podman they use slirp4netns and all get the same IP, with rootful podman or Docker a bridge network is established so that containers that aren't in the same pod can communicate with each other.
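A sketch of the two cases (pod/network names and the `myorg/app` image are illustrative):

```shell
# Rootless: containers in the same pod share one network namespace,
# so they talk over localhost
podman pod create --name apppod
podman run -d --pod apppod --name cache redis
podman run -d --pod apppod --name app myorg/app   # reaches redis on 127.0.0.1:6379

# Rootful podman (or Docker): a bridge network connects containers
# that are NOT in the same pod (name resolution may need the
# dnsname plugin on podman)
sudo podman network create appnet
sudo podman run -d --network appnet --name cache redis
sudo podman run -d --network appnet --name app myorg/app
```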


    yrro@host$ podman run --rm -it debian:unstable bash -x -c 'id; cat /proc/self/uid_map'
    + id
    uid=0(root) gid=0(root) groups=0(root)
    + cat /proc/self/uid_map
             0  876099160          1
             1     231073      65536
This is done as a regular user with special rights on the system; all that is required are entries for yrro within /etc/subuid and /etc/subgid. There's no equivalent of Docker's daemon that hands out root on the machine to anyone who can connect to its socket.
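Those entries are one line per user in the form `user:start:count`; matching the uid_map above, they would look something like this (the starting offset is whatever the distro or admin allocated):

```
# /etc/subuid and /etc/subgid: grant yrro 65536 subordinate IDs
yrro:231073:65536
```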

Cut out the middleman. They don't require the Docker daemon, for one, which isn't necessary. The client/server socket interface is the wrong model; the POSIX userland and filesystem interfaces to the kernel are a much better fit.

The downside to podman is that you lose docker-compose. (Yes I know there is podman-compose, but it isn't as complete.)

docker-compose has issues of its own, but when you can use it then it works really well.
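For anyone who hasn't used it: a minimal compose file looks something like this (service names, ports, and images are illustrative), and `docker-compose up` brings the whole stack up together:

```yaml
# docker-compose.yml: a minimal sketch
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```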


At least for my use case, I haven't really noticed anything missing from podman-compose when I dumped docker-compose for it.

I think ideally they want you to use kube YAML for this use case, though I haven't because it doesn't let me associate a named volume with a set of containers. The suggestion is to use bind mounts to the host instead, which feels like an issue for container portability. Podman Compose does not have this limitation.


I tried to use the Podman and podman-compose stack for a large development Docker environment I run locally.

The individual containers build alright, but running the compose file with podman-compose failed miserably with unhelpful output: just a bunch of Python stack traces. Running it with docker-compose just worked.

I think that docker-compose is just a lot more lenient with YAML type errors, i.e. where things should and shouldn't be surrounded in quotes. podman-compose just needs to be a little more forgiving, and continue to parse the compose files if the semantics can be assumed, even when the syntax is not perfect.


I keep hearing this meme that Kubernetes made it hard for Docker to monetize Swarm. But as far as I can tell, Swarm is free and open-source. Docker has never actually charged money for it. What am I missing?

They were early in doing decent userland tooling around cgroups etc. Docker Hub too was a genius idea.

However their engineering went to crap - tonnes of issues with new versions where networking and other stuff broke, plus the whole "runs as root" thing. People only then realised that Docker wasn't the tool, it was the wrapper around it.

All this time they were trying to monetise it and failed pretty much.

This is the view as an outsider who's used it for years - anybody got a more accurate view?


The short answer is that selling developer tools is hard. Software engineers especially do not like to pay for stuff. We used to be in the DevOps space [0] and it is something that requires heavy enterprise sales to monetise well.

There are few companies that have truly taken off selling digital shovels; one example is probably JetBrains. Most developer tooling/platform companies either died or got acquihired once they ran out of funding. Flynn and Deis were both acquired, and I suspect even the new generation of PaaS like Vercel will have to look for alternate sources of revenue once the VC funding dries up, if they do not achieve profitability.

The container space is extremely challenging if you are competing directly with companies like Google, with tremendous amounts of cash and a willingness to buy developer mindshare. Docker may have popularised containers and singlehandedly brought the ideas of BSD jails mainstream, but profiting from it is another thing entirely (you can do pretty well consulting on Kubernetes etc., but that would hardly give unicorn levels of return).

[0] https://news.ycombinator.com/item?id=23254394


The eternal question of how much you can capture of the value you create.

Which Microsoft understood well: those dev tools are popular but can't make money. Well, Microsoft has money and needs to become popular again.

Deal.


Why is JetBrains a rare success compared to others?

JetBrains seems to do a good job sticking with the core audience of in-the-trenches coders, instead of trying to expand in a way that feels corporatey and enterprisey.

I suppose it's hard to sell tools to people whose job it is to build tools, e.g. John Resig could never have sold jQuery because someone would have just written a feature compatible version for free. (edit, well the source is visible so that might be a bad example)

One thing coders do poorly is UX. Most UX-heavy things I've seen done right are commercial and light-years ahead of the open-source versions.


Mostly because they are really, really good, and they don't ask much for it.

I think I pay them less than $150 for a yearly subscription to their entire toolset, which is a price point that makes it stupid not to buy them. ReSharper is well worth a cup of coffee a week, by itself, and then there is DataGrip, WebStorm, IntelliJ, profilers, decompilers, and other stuff thrown in.


I also enjoy their universal subscription but I recommend trying Rider if you like ReSharper. Outside of small solutions, I find that ReSharper requires more patience than I can muster. The type-and-wait is significantly worse than I remember from classic Mac all-in-ones from the early/mid 90s.

Type and wait is literally the main reason I do most of my actual work in VS Code now instead of Visual Studio... I can't stand the full vs experience... code is just nicer imo.

What do you mean by type-and-wait?

This is why Microsoft's strategy is so interesting. They're capturing the value of IDEs and GitHub (a company that struggled to monetize while solo) by soft-bundling them together with Azure, which is extremely easy to monetize.

If it works, all those cool VS Code features will be covered in your monthly cloud bill.


telemetry.

docker has a bunch of telemetry.

microsoft is gathering data: linkedin, windows, wsl, github, docker

who's who, and what are they doing, compiling, checking in, deploying


Docker was spun out from dotCloud (a Heroku-like service) and the dotCloud service was sold off. I'm curious whether they could have monetized dotCloud.

Next up: Microsoft acquires Docker.

That will make Microsoft own:

- #1 Code repository: Github

- #1 Package management repository: NPM

- One of the most popular editors: VSCode

- THE container engine: Docker

I have nothing against this btw, but just wonder at which point the government will throw the antitrust card


I'm pretty sure MS has to actually behave anti-competitively (and in the current climate, possibly show obvious harm to customers) before they'd get in trouble. It's not illegal to own 90% of a market; it's illegal(ish) to use that to shut out others, which I don't see them doing (yet).

(IANAL; this is one layman's understanding, not legal advice)


That's exactly what happened in their late 90's antitrust case. It wasn't the OS monopoly, it was them abusing their relationship with computer makers to prevent installation of other software.

Well, and I've heard a rumour that their unwillingness to say "how high" when the doj wanted them to jump wrt "lawful intercept" played a part as well.

Is there anything to back this up at all?

Honest question: how does Microsoft's ownership of those stifle competition? How is it anti-competitive? Just because they're popular doesn't mean they are somehow stifling competition.

Again, honest question, obviously IANAL.


> The idea of the integration is to make it easier, faster and more efficient to include Docker containers when developing applications with the Microsoft tool set. Docker CEO Scott Johnston says it’s a matter of giving developers a better experience.

What they actually mean is the opposite. They want to make Azure the easy choice when developing with Docker. They go on to say it can take hours or days to get the integration set up manually.

If Azure “just works” and the other clouds take days to get integrated, owning Docker would let them make Azure integration a priority while neglecting or sabotaging integration to other clouds. The gap gets bigger and Azure feels even more like the obvious choice if you want to get anything done.

IMO one of the major cloud vendors will buy Docker this year. Microsoft seems like the obvious choice right now.


> If Azure “just works” and the other clouds take days to get integrated, owning Docker would let them make Azure integration a priority while neglecting or sabotaging integration to other clouds. The gap gets bigger and Azure feels even more like the obvious choice if you want to get anything done.

Which is a playbook they've demonstrated as recently as last October, on Mover[1]. It used to allow you to migrate data between a whole host of sources and destinations, and the guides/help docs even still reference those capabilities. Microsoft acquired it and made it free, but have turned it into a one-way tool: the only supported destinations are now Azure, Sharepoint, and OneDrive.

[1] https://mover.io/


Totally agree with you. This is an attempt to make Heroku for enterprise-scale systems. The risk, as a user, is that you end up locked into a ton of automatically configured, Azure-specific costs. It's a recipe for an antitrust suit based on bundling (i.e. the same situation that MS lost with IE many years ago).

Well, they did just make a bunch of previously paid GitHub stuff free. That's something only a company the size of Microsoft can do.

According to GitHub's CEO posting here [1], the costs are sustained by the paying GitHub Enterprise user base.

[1] https://news.ycombinator.com/item?id=22867808


Sure. Large companies make things free because they can subsidize them with other products.

The question is: did they do this to win market share? I think it makes _total_ sense for MS to make a bunch of GitHub services free. I also think it's harmful to competition.

One of the worst policy choices the US has made in the last 30 years is judging antitrust almost _entirely_ through the lens of "monetary cost to end users".


I would argue they did this due to competition with GitLab, which is backed by a working business model.

The fewer reasons there are for using GitLab (private repos were one thing that got me initially: using GitLab over my self-administered servers to move code around and share with friends), the better for keeping their quasi-monopoly.


GitLab has always had many of those same features, and more, for free, and they aren't as large as GitHub alone was.

seriously now. Microsoft built VSCode. It’s their fault that they managed to build something successful?

Isn’t VScode a fork of Atom?

no

as far as I know, no. Both Atom and VSCode use Electron, but VSCode is something MS built from scratch.

And Electron was built for Atom, so in turn VSCode uses the core of Atom, even though Electron (after VSCode started using it) became more independent.

sure. the point is that it’s not a fork of atom. if you use leftpad you’re not a leftpad clone

They both are electron-based, and when MS acquired GitHub, Atom took the worst part...

I doubt MS acquiring GitHub had a major role in Atom's downfall. Atom's market share was already declining fast.

i tried atom but it was atrocious when it came to even medium size files. vscode loads huge files without blinking. atom would just freeze

I concur with both, and I made the switch around then because my Atom was unbearable (although heavily loaded with probably unneeded extensions).

But VS Code has been a smooth experience since then.

I only brought it up because it certainly played a role in GitHub no longer having a reason to invest resources in Atom, with a better alternative also being developed in-house. I also recall, from other HN threads, vocal users asking GitHub's CEO about it, and why he stated just after the acquisition that Atom would be maintained even with Microsoft owning GitHub from there on... which didn't happen.


IIRC Nat Friedman's words were something along the lines of "we'll keep developing it until there's enough people using it"

When vscode first came out, all the developers in my company were using Atom. Then a few of us started using vscode and it was a smoother experience. Atom would freeze all the time. Then slowly, one by one, people tried vscode and switched over.

Could any Atom users explain what it is that made you choose Atom over another text editor?

Can't speak for GP, but VS Code came out after Atom and Brackets; both used similar approaches and both largely sucked. In fact, I almost didn't even try VS Code because of how bad the experience with Atom was.

In retrospect, I'm glad I did. The biggest features to me beyond a simple editor are a directory tree view and an integrated terminal. I don't recall anything that had those, or at least not working nearly as well, before VS Code. Everything else on top of that is just extras: Git integration, test extensions, integrations galore. In the end, VS Code has been a really great experience for me. It takes a little longer to load than the simple editors.

This doesn't even get into the Remote WSL/SSH extensions that I've started to make more use of the past few months. I've used vscode on Windows, Linux and Mac, and overall don't think I've been happier with a single editor.


I believe it’s based on Electron (which itself was part of Atom)

Parts of VSCode predate Atom. The core "Monaco" code editor was built for parts of Azure and battle-hardened in IE10+'s dev tools. The biggest chunk of code that VSCode even shares with Atom is Electron, which is likely the source of most of the confusion that VSCode "forked" from Atom, given Electron was still called Atom Shell when VSCode started.

Nobody's assigning fault from that, it's just an observation about expanding market share across the realm of software dev.

Antitrust doesn't punish success, it punishes abuse of the position that success brought. In the US, the Chicago school of thought says a company doesn't do anything wrong from a market position until customers are charged more, so megacorps dumping free use of a product on customers is not factored into the position.

In the EU, monopoly is gauged by how much choice and diversity is lost, even in the face of free product. I am very much aligned with the EU view of monopoly.

Dumping free services is great (temporarily) for the user, but ultimately harms them in that it starves out competition that can't afford such dumping.

*edit, words.


that’s a false dichotomy

IMHO, price is one thing you look at as a customer. If the value you derive is greater than what you pay for it you should definitely pay the price for the product.

people don’t use vscode because it’s free. they use it because it’s great.


I use vscode because it's free. I use eclipse because it's free. I use sublime because it's "free for evaluation" forever.

The convenience of downloading and installing it on any machine at any given time, without having to deal with payment, removes friction, and that's a huge thing.


vim is free, and has a large user base. But even more people use VSCode because it's powerful, free, and easy to use. Meanwhile, folks are rebooting their boxes just to exit vim.

counterpoint: i pay for intellij because it’s great. eclipse can be free forever but that does not make it less of a dumpster fire.

Don't kid yourself. People use vscode because it's free, first, and because it's great, second. It probably wouldn't have even 1% of its user base and plugin writers otherwise.

There are tons of free options out there though. People chose to use VSCode from the list of free tools because it is great.

And I have no problem paying for an IDE, but I stopped because VSCode was as good, if not better, than any paid option out there for my uses.


If "free" and "has plugins" were what mattered, there is Notepad++, Atom, Emacs, and many other choices. VS Code is doing rather better than all of them.

I only said that price is an overriding consideration. It would not matter how much better it is if it were not free.

Are there not already viable free alternatives to all of MS's, as well as to many of their paid services? I don't see how MS is in a unique position, aside from being more vertically integrated than anyone else (excepting Apple, on a different range of products).

Users of technical software (I.e. developers, especially for websites) have never had it better.


> In the US, the Chicago School of Thought says a company doesn't do anything wrong from a market position until customers are charged more

> In the EU, monopoly is gauged by how much choice and diversity is lost, even in the face of free product. I am very much aligned with the EU view of monopoly.

Interestingly the US used to have the EU viewpoint as well, until the 1970s. Robert Bork was one of the main people behind that.

For those interested in a deeper look at Antitrust in America, Planet Money had a great three part series on the topic [1].

[1] https://www.npr.org/sections/money/2019/03/20/704426033/anti...


no i think it's quite a feat and admirable. I actually hope the government doesn't step in unless MS tries to take advantage.

it was just a hypothetical question.


They also own Azure, so you can run everything you build in their ecosystem. It’s quite the achievement, and likely their plan to beat AWS. Developers, developers, developers.

I am a huge AWS user. However, as a VSCode fan and generally happy GitHub user, I also regularly consider playing with Azure integrations just to see how seamless they have made the experience.

Some food for thought:

You can connect one Excel file to another via Azure Functions with seamless integration.


You can connect to two Excel files together using Power Query. No need for Azure.

Then the Linux world moves to systemd-container, CRI-O, or something more sane, and the world keeps moving.

The move is well underway.

The Linux world is mostly on the runtime side of it, not the developer side of it. Both are important and docker will likely continue to see success with Windows and Mac users.

What's Microsoft's strategy? The AppGet saga has made me cynical.

Start selling Microsoft products to the bought userbases? But I suspect Microsoft is targeting companies rather than the developers themselves. Cynically, the drug dealer strategy? Get your (big, slow to change) customers dependent on the free stuff and then jack up the price?

Or bundle free software into a development environment to attract SMEs and open source developers, then integrate and promote Microsoft's own proprietary development offerings? Convert open source developers into Microsoft developers by owning the open source developers' tools of choice?

The tools will be maintained well. And also they'll integrate incredibly well with Microsoft's own software[1]. And with more Microsoft developers that means more Microsoft software sold to companies.

[1] Kotlin is a good example of this. Kotlin was integrated so well into Android Studio. I just pressed one button and, bang, kotlin. I never went back to Java.


if you're asking these questions, you probably don't understand how microsoft makes money.

MS already has a working business model (a very lucrative one at that) for enterprise, they can just integrate it into their existing user base and they will gladly pay. There's no hidden agenda.


My best guess: Expand the clientele of Azure from 'corporations and C# shops' to 'everybody and their grandma' like AWS has done - basically going after the long tail of cloud hosting. And I think, apart from improving Azure itself, they're also trying to do this by making sure the best possible developer experience is via Microsoft's developer infrastructure, so they get every chance to 'upsell' and integrate seamlessly in the tools you're already using. At the same time, they're trying to make cross platform development (via Xamarin and WSL) the most convenient way to target other OS's, so they're still in on the action when iOS, Android and Linux webservers are involved.

> basically going after the long tail of cloud hosting.

I want to really stress that your thesis is on point, but it's not about cloud hosting per se. If software is eating the world, and everyone is going to have some level of proficiency as a dev, Microsoft is targeting being the digital operating system of the economy. You write and run your business on Microsoft, whether you're a Fortune 500 or 5 people working out of a coworking space, and you can even get email, team chat, office apps, workflow automation tools, and everything else under the MS umbrella if that's what your business needs.


Developers, developers, developers!

https://www.youtube.com/watch?v=SaVTHG-Ev4k

I don't know if I should be happy about Microsoft going away from the Not Invented Here-mindset or sad that the best companies get owned by the biggest companies.


I'd mark it largely under marketing. As others have already said, Azure is Microsoft's real moneymaker. Forget about Windows and Office, they are a shrinking market.

In the end, who makes the decision on which tools to use? Developers and technical leads, not some CEO or middle management. Position yourself within the developer community, build a good name, and be sure they will consider other Microsoft services as well, especially when those already integrate very well with your toolchain.


I don't even know what you are talking about. Even the highest analyst estimates place Azure revenue at $20bn out of the $130bn MSFT pulls in. Earnings-wise it's even more lopsided.

Yeah, the AppGet/WinGet fiasco is disappointing to say the least.

I think MS is mostly trying to ensure cohesion between the tooling and their Azure platforms/services. This is their longer-term strategy, which has largely moved Directory services and Office to subscription models, and shores up the losses in Windows Server deployments by making rented servers on their infrastructure the lowest-friction option and a revenue source going forward.


Microsoft was famously against open source. Then open source prospered, and I'm sure some saw that as Microsoft's comeuppance. Yet Microsoft was able to shift some of their focus to become OSS friendly, and for those parts of the ecosystem that have a commercial focus, they can acquire what they want.

I think it was more of a slow transition, and more focused on Linux. Once Linux became big enough to disrupt them, that's when they started to change. Microsoft Research has been contributing to Haskell for a long time. I think they are moving toward a more service-oriented company that doesn't care what you do, but will make it easier to stay in their ecosystem like Azure, or will just focus on getting more developer goodwill and lock-in.

They saw that developers are eager enough to do Microsoft's work for free. See .NET Core and its related projects where Microsoft accepts a lot of patches from outside contributors and sometimes needs to admit that others know their code and its sometimes decade old history better than they do.

I think the greater community would rather have it this way. Trying to get a bug in .NET 1.x fixed was not a pleasant or completely legal experience (peeking into .NET assemblies and decompiling just to get to the root of the issue).

Even if I only ever had to do it once... having the option is much better than not.


Except for GitHub (which has at least one credible competitor in GitLab), the important parts of all of these projects are open source, which means that Microsoft doesn't really have any leverage. You can be sure that there'd be well-supported forks if MS tried to take Docker or VS Code proprietary.

There already are forks for VS Code (telemetry removed iirc).

Might be a good thing for them to get picked up by a company which would continue working to make the tooling better. Docker seems to have been struggling as an independent company.

This wouldn't be the sort of acquisition where MS takes what it wants for itself and kills the product. Not mentioned (unless I missed it) in the article is that Docker has been putting in a lot of work to make the tooling work well with WSL 2. So, having a foot in both Windows and Linux.


I don't disagree with your prediction - however, after Microsoft's infatuation with Linux these past few years and the impressions I got from the recent Build conference, I think the next purchase that Microsoft will make will be Canonical.

An antitrust case wouldn't really apply in a situation where all these services and tools have numerous alternatives and no one is compelled to use any of them, in isolation or together. Not to mention that a great deal of the things listed are open source, or offer a free tier. It's hard to make a case for the unfair competitive practice of vertical integration when you give away those services and tools and alternatives are a click away, even if leaving an established platform carries migration costs and the loss of network effects.

The real argument is that the ever-increasing scope of these entities creates moats far out from the walls of their core product, driving users inward to the paid offerings and, either by design or happenstance, salting the earth for miles around so that smaller rivals cannot gain a foothold delivering services to sections of these hegemonic organisations' user base. Given such a foothold, those rivals might in time have used the traction to offer competing alternatives to the paid services on which the hegemonic organisations are built.


How they don't already own StackOverflow, I don't really know.

Anyone here using k8s locally for development? I use it in production, but docker locally. I'd like to actually just remove the Docker dependency if possible.

I quit using k8s locally for development once I discovered that podman can run the same Pod YAML as k8s. That basically replaced docker and docker-compose for me. I just have a thin bash script that launches postgres and redis for me for dependencies. If the app has a lot of deps then I might consider local k8s again, but until then I'm quite happy.
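To illustrate what that workflow looks like, here is a hypothetical pod.yaml for the postgres/redis dependency case (names, tags, and passwords are made up for the example); podman can launch it directly, and the same file works with kubectl:

```yaml
# pod.yaml -- illustrative dev-dependencies pod
# Run locally with:  podman play kube pod.yaml
# Or in a cluster:   kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-deps
spec:
  containers:
    - name: postgres
      image: docker.io/library/postgres:12
      env:
        - name: POSTGRES_PASSWORD
          value: devpassword        # example only; don't commit real secrets
      ports:
        - containerPort: 5432
          hostPort: 5432
    - name: redis
      image: docker.io/library/redis:6
      ports:
        - containerPort: 6379
          hostPort: 6379
```

The nice part is that the one manifest replaces a docker-compose.yml locally while staying valid Kubernetes YAML for deployed environments.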

I run k3s locally (or k3d for disposable purposes) as a lighter way to get the whole api. Heck, k3s is even great deployed.

Title: Docker expands relationship with Microsoft to ease developer experience across platforms.

Content: Docker expands relationship with Microsoft to offer tighter Azure integration (that's my understanding from a skim of the article).

Not sure how that eases developer experience across platforms, especially for developers who don't use Azure.

I was hoping for better Docker experience on macOS and Windows — since we're talking about Microsoft here, at least Windows.

Somewhat related: GitHub Actions doesn't support Docker container actions on macOS and Windows environments, which sucks.
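For reference, this is the limitation being described: a job-level `container:` (or a Docker container action) only works on Linux runners, so the Windows job below has to run directly on the host. The workflow is illustrative, not from a real repo:

```yaml
# .github/workflows/ci.yml -- illustrative example
jobs:
  test-linux:
    runs-on: ubuntu-latest
    container: node:14        # fine: container jobs are supported on Linux runners
    steps:
      - uses: actions/checkout@v2
      - run: npm test
  test-windows:
    runs-on: windows-latest   # no `container:` support here; runs on the host
    steps:
      - uses: actions/checkout@v2
      - run: npm test
```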


Docker for Windows is at parity with Docker for Linux, so I'm not sure what improvement you'd like to see.

Been using the new WSL2 integration with Docker since it became GA yesterday. Ditched my VMware setup and I'm loving it so far.

For me, Docker on WSL2 keeps crashing on boot.

God knows why. I just reset it to factory defaults every time, open VS Code, and hope it works, lol!


Been using it since late February or early March on the Insiders channel. Been incredibly happy with it and surprised how well it's been working for me. Both WSL and Docker have been really nice overall.

I actually ditched Docker on Windows because of the tight entanglement with Hyper-V. I need access to multiple VM environments, and Hyper-V locks you down, preventing the use of VMware and VirtualBox. Moved to Docker in a Linux VM.


... slowly :)

Could you clarify what you mean by slowly? The execution speed of the VM? Or the speed at which they're adopting execution on top of Hyper-V?

I haven't noticed any performance issues with Hyper-V (when Hyper-V is enabled, even the Windows OS is running within Hyper-V). Does something with the Windows Hypervisor Platform that is used to support other VM hosts running on top of Hyper-V introduce some type of performance hit?


Probably exec speed of the VM. From the changelog:

> Added support for using Hyper-V as the fallback execution core on Windows host, to avoid inability to run VMs at the price of reduced performance


Honestly I think the biggest issues are:

Unlike Hyper-V itself, where some VM exits that return to the host are implemented (for speed reasons) in kernel drivers using undocumented APIs, the supported Windows Hypervisor Platform API is 100% userspace. (Unless this has changed since the last time I looked into this, which was admittedly a fair while back.)

That means that literally every time an instruction is run that needs custom software backing, you not only get the VM-exit to the hypervisor and then the VM-entry back into the host (Dom0) kernel, but also a kernel-mode-to-user-mode transition. Then you may well need to query some things in order to know how to handle them, which may incur more round trips to the Windows kernel.

That is all bad enough that VirtualBox wrote a kernel driver that makes undocumented calls from kernel mode in order to handle certain operations fast enough to be tolerable when doing things like running Windows 95.

The virtualization software is limited to whatever scenarios Microsoft decided to support, driven mostly by the needs of Hyper-V itself. Windows Hypervisor Platform users do not get to set the VM control values directly, so any values that Microsoft does not offer an API to set are stuck at whatever Microsoft chose for Hyper-V.

This can make certain things literally impossible, or require really nasty workarounds that have a significant performance impact.

Certain simple VM-exit scenarios get handled directly by the hypervisor without involving the host OS. If the default behavior is not good enough, there might be an option to handle them by re-entering the host OS, but obviously that is much slower.


If you're already using a VM, why do you need Docker?

I'll admit to not understanding Docker. Is it a VM replacement, or is it a glorified Debian package?


What are you wanting in VMware or VirtualBox that you don't get in Hyper-V? I'm just curious. In general, with WSL2 and Docker integration, I've been exceedingly happy myself.

Oh no, I love Docker.

It was rumored that Microsoft wanted to acquire Docker for $3-5 billion but was turned down by Docker. That was understandable after they enjoyed their overnight success (there is rarely a better example than Docker when talking about overnight success).

In the end, they underestimated how complex the market is and how limited their expertise was. In retrospect, Diane Greene's move to sell VMware more likely gave that firm a long shelf life, in contrast to Docker.

In my leisure time, I started an article on why Docker is destined to fail as an independent entity, but sorry, no time for writing while incubating a start-up...

https://www.sdxcentral.com/articles/news/sources-microsoft-t...


This makes a lot of sense to me.

Most of Docker's products are being superseded by other tools in the Linux/enterprise world (buildah, podman, Kubernetes, etc.), but the one area where I think Docker still provides actual value to the industry is its Windows distribution. I'm a developer running Linux on a team filled with Windows users, and the fact that we can all share Dockerfiles is a big benefit. Without Docker there's a good chance I'd have to switch to Windows or just find a new job.
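As a concrete (and made-up) example, a Dockerfile like the one below builds the same Linux image whether `docker build` is invoked from a Windows or Linux host, because the build itself runs inside the Linux daemon either way:

```dockerfile
# Illustrative Dockerfile -- file names and app are hypothetical.
# Builds identically from Windows or Linux hosts, since the build
# executes inside Docker's Linux daemon/VM in both cases.
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```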

Strange as it seems, I think the industry is moving such that soon enough most actual Docker users will be on Windows anyways. So why not just embrace that?


Though AWS ECS runs on Docker and will for the foreseeable future.

Actually, ECS Fargate 1.4.0 just switched to containerd!

“Containerd is replacing Docker as the container runtime”

https://aws.amazon.com/blogs/containers/aws-fargate-launches...


True, I should have specified "ECS on EC2"; that will likely not be changing anytime soon.

Let’s not forget Microsoft also bought Deis. That helped shift the Azure container service from Mesos to Kubernetes. They also hired Brendan Burns.

Docker democratized the creation of containers, but other companies built more and better services on top of that.


Docker seems to be dying slowly. Another great piece of technology that most people looking for a Docker alternative are not aware of is LXC. The UI feels clunky at times, but the containers work and are persistent.

LXC takes quite a different approach to containerization compared to Docker (e.g. running full systems in containers by default, instead of a single application process).

What is it about their approach that you feel is superior?


Maybe Docker will actually work decently on Windows.

Although I'm going to scream if it just gets jammed into Hyper-V.


WSL2 uses Hyper-V and Docker with WSL2 integration has been exceedingly nice imho.

Honestly, Microsoft should just buy Docker and end Docker's misery forever.

And Microsoft seems to be the best fit as a potential acquirer compared to AWS or Google.


What's funny is that, before they were bought out by Samsung, Joyent had a REALLY smooth interaction and deployment model for Docker.
