No thanks! I'm not buying a phone that won't get updated with security updates. My trash box is littered with Korean Android phones that have no upgrade path or ability to clean with a fresh install.
I'm opting for a Pixel 2 and an iPhone X for work and personal. They're expensive up front, but way cheaper and far more convenient in the long run. They always get immediate security updates, and their OSes are updated for at least 3 years. Korean phones are rip-offs; they barely update at all. Every single one of my Korean phones is still waiting for security updates. It costs way more to own a Samsung or LG phone in the long run.
You sound like a bigot, dividing phone manufacturers by country. Nation and industry are not so close in Korea that you could conflate any one company (especially any one subdivision) with every other similar company in the country.
The Pixel 2 and iPhone 8 are as Korean as a Kia. Surely you should not buy them, since they contain that pesky Korean commoditized OLED technology.
It doesn't matter who the OEM is. All Android phones sold in the US are now wrappers around Qualcomm Snapdragon SoCs. They all have the update problem because Qualcomm has refused to work with other organizations or maintain its kernel forks.
It's been getting better recently as third parties have taken on the work of maintaining drivers for Qualcomm chipsets. A lot of their chipsets can now run on upstream kernels, and if Android kernels were closer to upstream, AOSP kernels could probably be built to function on some Qualcomm chipsets today.
Android doesn't expose a proper Linux to userspace, and Google has been clamping down on what we are allowed to do with the NDK since Android 7.0 (edit: corrected, only started in 7.0).
So playing "what if": you could replace Linux with something else that is compatible with the NDK APIs and no one would notice, other than the OEMs.
Maybe 2018 is finally the year of the Linux desktop.
Wait... :)
Jokes aside, I can see how this would be useful for sysadmins and devs - bring along your smartphone and you're set - but this would never fly with the general public, even the geeky part of it. Very nice approach though; curious to see where this ends up going.
From experience, I now pause and think twice about any statement I make using "never". You know, I thought Facebook would never become dominant with such a one-size-fits-all interface, where I can't even choose a theme or my colors, let alone which elements are displayed and in which order (I keep thinking their UI is ugly to this day; if this is our 'home', it looks like a housing project on the internet to me). Or back in 2010, when I bought an iPhone 4 and later an iPad 2, I thought Apple would never keep iOS so dumbed down compared to OSX (e.g. "Smart" features, a great first step towards intuitive automation, the real grail of computers imho, unless one thinks it's OK to become a robot and click/tap thirty times to perform the same operation thirty times).
So... yeah, I don't know. Everything says you're right today, but with just the right tools/apps, a good Linux distro tailored to popular phones could reach the same level of popularity as, say, iPhones (currently ~10% of the market, roughly one for every ten Android phones).
To be fair, this is already what we do. Android already IS Linux. You can install plenty of scripting environments like Ruby or Python while having access to the underlying Linux system, which is only partly crippled.
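For example, here's a small sketch (assuming a Python environment such as Termux's, and a readable procfs; just an illustration, not Termux-specific code) showing the Linux kernel sitting underneath:

```python
import os

# The kernel identifies itself the same way regardless of the userland
# on top; on an Android phone this reports "Linux", just like on a
# desktop distro.
uname = os.uname()
print(uname.sysname, uname.release)

# procfs is the same interface desktop Linux exposes.
if os.path.exists("/proc/version"):
    with open("/proc/version") as f:
        print(f.read().strip())
```

The userland around it is what differs, not the kernel interface itself.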
Personally I see this more as a marketing thing from Samsung than anything else.
A shitty, half-assed Linux. Until I can just put Debian on natively and use apt-get for packages without a problem, I want us to keep moving toward an actual Linux on the phone.
I would love to get away from Android, but this is helping as much as Ubuntu on Windows is helping to get away from Windows.
If you want Debian natively, that has also been possible for a while, next to the option of running it in a compatibility mode on top of your Android kernel, which doesn't even require root. Both methods support apt-get, X, and whatever else runs on ARM.
Then there is also the Ubuntu phone project (which I assume is pretty much dead by now), which has no Android in the mix at all, except a possible compatibility layer for apps.
Also, it's not a half-assed Linux. It's just a Linux that builds its environment on unusual libraries which are not compatible with your desktop-like environments.
> I would love to get away from Android, but this is helping as much as Ubuntu on Windows is helping to get away from Windows.
I'm not sure how much this helps; I'm just replying to the claim that "Android already IS Linux". That is technically true, but not practically true. I also don't want Debian to run on top of the Android kernel; I want to use Debian's kernel. The Android one is pretty neutered and crappy.
And yes, it is half-assed Linux precisely because it's intentionally not compatible with desktop-like environments. Android has different goals than what I want out of a phone, and that's perfectly fine, but that doesn't mean I have to be happy with Android as my only option (excluding iOS, as that is a different beast altogether).
Just because you are OK with it doesn't mean that I have to be, and every step further we take from Android, the better, IMO.
Samsung also has a WebKit/JavaScript-based SDK. It's actually pretty good, as it exposes a lot of OS functionality in the JavaScript API. It might be the best option right now.
It's quite interesting that with the power of today's mobile devices (multicore 1 GHz+ CPUs, GPUs, 2 GB+ RAM) we still don't have a standardized interface to plug them into desktop peripherals and launch a desktop OS/GUI.
One would think that such a universal interface would quickly become a standard offering in airports, hotels, libraries, conferences, etc.
There was: the Apple 30-pin connector. It was revolutionary and got built into plenty of hotel furniture, where you can still see it. And where it now acts as a warning not to build tech into your furniture again.
People keep building phone dock systems and they just don't really seem to take off. Maybe the best one available today is the HP Elite X3 .. running the discontinued Windows Phone.
Like the iPod itself, it's not a question of new functionality, but better usability/popularity of existing functionality. The revolutionary aspect was that you could just drop an iPod into the dock and it would start playing your music through the dock's speakers, while also charging. This didn't require the dock to be "smart", it was analog audio.
What's the problem with USB3 not carrying analog audio? Instead of placing the DAC in the phone, we now place it in the audio device; it makes no difference. One could argue it's better, as we now have a choice in the quality of the DAC, instead of being stuck with the one in your device.
The Apple 30-pin connector was introduced in 2003, when USB 2.0 was all we had, and it's a right pain to enumerate a device and persuade it to start sending you digital audio over USB2. Whereas two analog pins Just Work.
Moving the audio to all-digital moves more NRE cost to the devices. It also means there's a potential for the "choice" of rubbish DACs to save cost.
Actually, they only work when they follow the standard physical layer. Try plugging a Line device into a Mic input, or a Line device into a speaker. Or an unbalanced into a balanced jack. It'll "work" but not "correctly."
And, just like saving money with "rubbish DACs", there's room for designing complete crap: noisy audio paths (and noisy amplifiers) in products.
If people could actually agree on a standard (USB3 seems a good choice), sending audio over a digital port seems fine.
The ONLY real problem is that digital audio isn't digital audio. It's a protocol with MANY audio formats: 44.1 kHz, 48 kHz, 96 kHz, 192 kHz; 2 channels, 4 channels, 7.1 channels; 16-bit, 24-bit, 32-bit float; compressed with different compressions.
THAT'S where the real shitshow happens. Because now you're basically in the old-school modem situation where you have to try various standards and HOPE that the sender and the receiver have compatible formats, and failing that, fall back to some standard required format like "stereo 44.1 kHz PCM".
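That negotiation dance can be sketched roughly like this (hypothetical format lists, nothing from an actual USB Audio Class implementation; just the "try preferences, fall back to a baseline" shape):

```python
# Formats as (sample_rate_hz, channels, bit_depth), best first.
SENDER_PREFS = [(192_000, 8, 24), (96_000, 2, 24), (48_000, 2, 16)]
RECEIVER_CAPS = {(96_000, 2, 24), (48_000, 2, 16), (44_100, 2, 16)}

# The baseline every device is supposed to support.
FALLBACK = (44_100, 2, 16)

def negotiate(prefs, caps):
    """Pick the sender's most-preferred format the receiver accepts."""
    for fmt in prefs:
        if fmt in caps:
            return fmt
    return FALLBACK  # hope both ends really do implement the baseline

print(negotiate(SENDER_PREFS, RECEIVER_CAPS))  # (96000, 2, 24)
```

The failure mode is exactly the modem one: if either side lies about, or doesn't implement, the baseline, there's nothing left to fall back to.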
The number of ways analog audio can work (and equivalently, fail) is small: Mic vs. Line, Power vs. Voltage, Voltage vs. Current, and Balanced vs. Unbalanced. In practice, it's not a big problem because of connector conventions and smart signal detection.
Contrast this situation with digital audio, where the number of modes is exponentially larger by your own admission. Analog audio is more reliable.
This doesn't mean digital couldn't be more reliable; it's just that it hasn't typically been designed with the necessary metadata as part of the protocol.
>>This didn't require the dock to be "smart", it was analog audio.
Not smart, it just has a 30-pin connector. Having a DAC on board would be trivial (esp. after paying the royalties for the connector). So calling it revolutionary is just being an Apple shill.
Accusing others of being "shills" is a stupid tactic that does not belong on HN. Nobody's going to be shilling a connector that was introduced in 2003 and is currently obsolete.
Yes. That's why I said until USB-C. Which came out after Apple had already obsoleted the 30-pin connector. At the launch of the iPod it was not possible to do analog audio over USB2.
I can't help turning this into a headphone-jack argument:
I was able to plug my phone in at the last hotel I stayed in because its ageing 30-pin dock also had an aux plug, which my modern phone still supported. It also seemed a lot better for my device security to not give some random active device direct access to my phone.
USB-C is that standard interface - when working at my desk I can plug a single USB-C cable into my laptop and it gains two monitors, external audio, keyboard, mouse, webcam, and all the other things you'd expect.
I don't see how you can say that, unless you've not actually used Windows ME and/or 10 for any significant period of time.
Windows 10 is very well polished. It's also the first Windows where most of users' gripes are about stuff added on top of it (telemetry, a simplified/restricted update system, ...).
Windows ME, by comparison, was so bad that merely leaving it running would end up causing PC Health crashes.
Well, aside from Windows Defender steadfastly turning itself off because it thinks I have a different AV package installed (I don't) and "helpfully" redirecting me to uninstall the offending software first. Only there is no other AV package to uninstall.
I have so far been absolutely unable to find any sort of resolution to this issue, short of a full reinstall.
But the kernel and system side are miles above ME. We're talking about a time when you could turn a system on, sit there and watch it for a couple hours without touching anything, and it would crash itself.
The controller/driver side of USB-3 is still in a miserable state. USB-3 is a source of endless frustration both at the lab and on private computers. Issues include:
USB-3 controllers won't initialize reliably if there's a single low-speed device connected, like e.g. the keyboard/mouse I've connected to the USB-3 hub in my monitor, which I'd like to connect to a USB-3 port on my computer so that I can plug thumb drives into the side ports of the monitor.
If they do initialize, they take ages to go through POST.
After waking from S3 it takes several minutes(!) for all devices to come up properly. Often the OS drops the controller into USB-2 mode, because keeping it USB-3 creates interrupt storms or similar.
Just having enabled the on-board USB-3 support without anything connected may cause random POST hangs.
All of this happening on pretty recent hardware (late 2015, but still available for purchase in that configuration as of today) with latest BIOS and drivers.
And that's without the issues USB-C introduces (alternate modes, power delivery, which requires driver support, etc.).
I noticed an interesting case when running Windows 7 with KVM and passing through one of my USB-3 controllers. The controller has a single port and a 3.0 hub connects to it. When I connect a USB-2 hub to the 3.0 hub, DPC latency spikes through the roof because of the interrupts. Even if there is nothing plugged into the second hub.
I wasn't sure if this was an issue with KVM, USB-3, or just a general issue with nesting hubs (even though I'm running low-speed/low-power devices and a 3.1 port should be able to handle it), but you make it sound like it might indeed be an issue with USB-3.
In the meantime you can buy a phone with MHL output, use BT for keyboard/mouse and WiFi for data. Root the device and dual-boot Linux. By the time the phone is obsolete, USB-C will be stable.
As for Apple products, it's available on the desktop (using Intel CPUs/chipsets) but not compatible with their iOS devices. So it's far from "universal", given that iOS devices are the overwhelming majority of their sales.
My intuition is that USB-C standards will eventually replace most Thunderbolt interfaces (e.g. for eGPUs etc.)
You guys are partly talking about the same thing without realizing it (and partly not, but you don't realize how): Thunderbolt 3 uses USB-C, because USB-C is the connector spec.
If I understood it correctly, Thunderbolt at the same time can "carry" USB. So basically if you have Thunderbolt 3, you also (potentially) have USB.
A non-Thunderbolt USB 3.1+ USB-C port can, via Alternate Mode (depending on devices and cable), carry e.g. a DisplayPort A/V signal over dedicated lanes (but with nowhere near the same bandwidth as Thunderbolt, so restrictions apply).
So: "Every Thunderbolt 3 is USB-C, but not every USB-C is Thunderbolt 3".
Yes, it can get confusing, but essentially you are both arguing that A is better than A, because Thunderbolt 3 uses USB-C. The fullest feature set, though, you only get with Thunderbolt, for now at least.
Interesting. Does this work for USB-C enabled laptops, say the latest MacBook Pros? Or does Apple only support Thunderbolt over USB-C for display connections?
It's the natural desire in a free market to gain monopoly status, plus the desire to shrink the support matrix. Therefore most smartphone companies will try to lock you into their own choice of OS. It's not an asshole move; it's just the most logical conclusion given the context they are in.
The way to get around that is supporting these kinds of projects (like Fairphone, Ubuntu mobile, Linux on Samsung) and gaining a true majority in the user base. However, that is unlikely, since for most people it's good enough to just run Instagram, WhatsApp, and Spotify. And for that, the OS really doesn't matter.
*edit: I thought of something else one can do. I'm not an expert on these, but the very flat laptops like the MacBook Air are, underneath the keyboard, basically a smartphone/tablet. Buying these may increase the incentive to create smaller and smaller computers. If what is labelled a "computer" gets as small as a smartphone, that's also fine for our use case, and it may be easier to digest for the marketing people of these big corps.
We are almost reaching a tipping point here - my laptop is still more powerful than my phone, but only just. If USB-C can really support this connectivity (2 monitors, plus keyboard, mouse and a couple of USB ports), a really great implementation could work well.
I think Apple is the only company that can pull this off right now; everyone else is either too fragmented to get the hardware/software integration right, or not popular enough for mass adoption.
If they release it, they could trigger a whole new era of computing. E.g. imagine going to work without having to bring a laptop along.
My phone has already replaced what I used to use my laptop for, but it's not going to replace my work PC anytime soon. Compiling our work C++ project takes about 40 minutes on a 32-core workstation, and my mobile phone isn't going to replace that right now, even if I can connect some peripherals to it.
Hmm something like eGPU, but for "real" CPUs, would be very nice... and interesting. For example, using a low-power ARM core for "daily" work on a laptop (e.g. browsing, text processing), and engaging a heavy-duty x86 CPU only when needed, e.g. by x86-only software...
Most may not, but many do. There are also many workloads that require a discrete GPU running at medium to high load constantly. Granted, I have an older iPhone, but my battery life drops from ~1 day of normal usage to ~1 hr of intensive usage, and it generates an _incredible_ amount of heat.
I don't think this will ever be a thing. For me, phones are (sadly) throwaway products, usually not built to last. Computers are expandable, upgradeable machines that you build for your use case and have for years without having to worry too much.
Even Mac folks keep their MacBooks at least twice as long as their phones.
So I don't see this happening for any kind of professional use anytime soon. But who knows.
Disclaimer: My last phone (downgraded recently, because battery life > 4k screens) is more powerful than my Laptop.
You didn't, and some others didn't either. But plenty of people, especially professionals, still do. Not to mention that nearly all laptops except Apple's and Microsoft's (which are in the minority) are upgradeable to some degree.
I have a thinkpad (x1 carbon) and it's definitely not upgradeable :-)
The people who will buy a phone dock are probably also the same people that value ultra-lightweight devices, so I don't think you're in the same bracket as them
Good point. The reason I'm not on an X1 yet (even though it's so fucking sexy) is that I depend on good battery life and cannot buy something without a replaceable battery. However, this already disqualifies me as a typical consumer, I guess.
Plus, I think we've got to a point where those upgrades are not totally necessary unless you work with cutting-edge software. My MacBook Pro from 2013 still works perfectly, and my Windows HP from 2014 does too... I'm a developer, and those two laptops work almost like the first day I bought them, without any upgrade. Some years ago, even Microsoft Office's newest version would slow computers down...
I have a ThinkPad from 2011, so I know what you mean. However, I would argue 2013 and 2014 are not old enough to worry about yet. Most better laptops were already > Sandy Bridge and had 8 or 16 GB of RAM. Most were also already equipped with an SSD.
If you look at my 2011 ThinkPad, however, it shipped with 4 GB of RAM and a 512 GB spinning disk. Spending only about $200 extra turned it into a 512 GB SSD + 16 GB RAM + new battery beast which outperformed my 2014 MBP for nearly all of my development tasks.
So I guess it depends on how you look at products. I tend to buy things I know I won't have to replace anytime soon.
My five year love affair with MacBook Pros ended after the second time in two weeks I spilled liquid on one. I went out and dropped $2k on a desktop and a desk to put it on. I'm sorely tempted to switch to a Mac Mini for my work machine.
But for work you probably want a more powerful computer...
So for us devs it would mostly be for non-work stuff. And then you have the problem that you'd have to carry around external hardware with you or find places that offer it for use.
AT&T killed that idea fast 6 years ago. When the Motorola Atrix came out with the ability to do that, AT&T charged a $20 tethering fee to use the laptop dock with the phone! That blew my mind. The phone wasn't tethering data to anything but itself. https://www.wired.com/2011/03/motorola-atrix/
As it is in most countries in Europe, and I think many in Asia as well (correct me if I'm wrong). This is just an example of America's hardcore capitalism, where certain markets are dominated by a few key players, and the only entity capable of stopping them from enacting these batshit-insane anti-consumer tactics is Congress, who are more than happy to take their bribes whilst shafting the average American consumer.
*chuckles* I had an ex-gf who worked on the customer service side of Sprint, and we went round and round about this. There must have been some serious corporate messaging to their employees to justify tethering fees.
I will say, in the carriers' only defense: when the market demanded everyone sell vastly oversubscribed "unlimited" plans at rates that were far too low to support them, then prohibiting tethering had an argument. (Anything that promoted higher data consumption led directly to increased costs.)
Now? With "unlimited w/ cap" or "data-by-the-GB" plans? Absolutely a ploy to screw customers.
Tethering fees made sense when the ability to tether was (officially) added to smartphones. The original "unlimited" plans people signed up for were intended to be limited by the rate at which the phone itself could use data, which was slow.
Tethering fees on limited plans are incredibly stupid, and just a cash grab by carriers. Luckily most major carriers have eliminated them now.
When I moved here, I was surprised to learn that I had to explicitly ask AT&T to enable tethering. In India, it is enabled by default and is part of your subscription; you just pay for the data depending on the plan you choose.
With AT&T, there's a tethering fee on everything. I'm glad that I'm no longer with AT&T, though. I switched to T-Mobile last summer; my bill went down from $200/month to $130/month for three unlimited lines with 10 GB of tethering/month.
In addition to Samsung's own smart docks, Sentio is doing something like that with their Superbook: http://www.getsuperbook.com. They had a dock as well for a while.
Phones with 8 GB of RAM, like the OnePlus 5, make this more and more appealing to try out.
"One would think that such a universal interface would quickly become a standard offering in airports, hotels, libraries, conferences, etc."
My favorite computing peripheral form factor has always been the PCMCIA card.
It seems to me that you could fit a minimum viable phone inside a PCMCIA card and then insert that PCMCIA card into a normal sized laptop when you choose.
The laptop would have no CPU/RAM - it's just a big USB keyboard and monitor - and the phone would probably have limited battery life, given the size (although you could bump out the non-port end of the PCMCIA card with some extra battery, the same way that WiFi cards had an antenna bump ...)
Something like this was proposed ... there was some minor project where they had built an entire PC (not a phone) into a PCMCIA card (although with different pin-outs) and proposed using it as a portable "guts" for any laptop "host" but I can't find the URL for that project anymore ...
I certainly would welcome a phone the size of a PC card, and a cable-less docking of the brain into the host laptop would be much better than either 2-3 USB cables or some weird, proprietary phone dock ...
The use-case is in the opposite direction: when I only have my phone, I have my entire computer with me - including the storage and the files and keys and so on.
No syncing required. You only have a single computer. That's a big win, I think ...
If you care about the stuff on that computer, you really want it synced somewhere anyway.
I don't personally see the utility in having my phone be my computer. I don't need Visual Studio or Photoshop on my phone. If I don't have my laptop/desktop, I'm not going to work on the things I typically do with a full-power computer. The phone is just not the right form factor, so it doesn't help for that stuff to be on my phone.
"If you care about the stuff on that computer, you really want it synced somewhere anyway."
I disagree.
If you care about (stuff) you really want to back it up somewhere.
This is different from "syncing", which can mean anything, is usually a completely unintelligible process for the end user, is fragile, and is actually a hard problem.[1]
Much more intelligible and manageable is to have one single repository of data and carry that "kernel" of stored data everywhere. Yes, certainly you should back it up, but the backup is just that: a point in time backup that you do not operate against.
[1] Two-way sync, dealing with new but different objects on both devices ... this is not a "solved" use case ...
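To make that footnote concrete, here's a toy sketch (a hypothetical per-file model, not any real sync tool) of why two-way sync is genuinely hard: once both sides have diverged from the last state they agreed on, there is no automatically correct resolution.

```python
def sync_decision(base, a, b):
    """Decide what to do for one file, given its last-synced state
    (base) and its current state on devices A and B."""
    if a == b:
        return ("in_sync", a)
    if a == base:
        return ("copy_b_to_a", b)   # only B changed
    if b == base:
        return ("copy_a_to_b", a)   # only A changed
    return ("conflict", None)        # both changed: no safe automatic pick

print(sync_decision("v1", "v1", "v2"))  # ('copy_b_to_a', 'v2')
print(sync_decision("v1", "v2", "v3"))  # ('conflict', None)
```

The "conflict" branch is the whole problem: every real sync tool has to either ask the user, pick a winner heuristically, or keep both copies.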
> This is different than "syncing" which can mean anything
Then we should probably define it before asserting that it's "completely unintelligible", "fragile", or a "hard problem", right? Yes, full multi-way sync is not a simple problem. Most scenarios don't need this (and even ones that do tend to devolve into simple cases since single-actor conflicts are not that common).
The simplest case for "sync" is just "my stuff is in the cloud". Call it remote storage, since sync is ambiguous. Make the client dumb, put the data online, and most of the complexity evaporates. Of course, local storage with remote backup is also a reasonable solution that has different tradeoffs.
For me personally, I prefer the "remote storage" solution for most things. I greatly appreciate that my email is just "magically" everywhere I need it to be without me carrying a repository of email in my pocket. I love that all my important documents and photos are accessible everywhere even if I forget my phone.
Actually, on many if not most devices you can just 'cast' the entire display to a Chromecast, which is inexpensive enough that it's unlikely anyone's going to bother developing a replacement mechanism. The nearest thing to that is MHL, which is apparently encumbered by licencing issues and requires use of the USB port (which means that an OTG adapter for attaching mouse/keyboard is now out of the question).
I think termux ( https://termux.com/ ) does most of what I need when I need a bit of emergency Linux on my phone.
Termux provides a recompiled Debian distro which runs as an Android app. It doesn't chroot or need root, and it works amazingly well. No desktop apps, though.
It's more useful to have standard Linux installed, unless you're incredibly sensitive about security. I treat Chromebooks as cheap Linux machines. You can buy a brand-new netbook and know Linux will run on it with no driver issues. It's still a chroot, but you get everything you need to do development.
Yeah, but you need the actual programs, and when I last looked, Termux didn't build packages that use X and didn't include the libraries to compile them yourself (or at least I couldn't find them).
Some software could probably be compiled on a different ARM machine, or would work when distributed in binary form. (Just speculation, but I don't see why not.)
It runs X, using the XServer XDSL app previously mentioned in this thread. I found some things rather quirky, but certainly fun to play with. My older Nexus 5 could do HDMI out with a dongle, add in BT keyboard and mouse, and it's mostly usable.
Doesn't run on Android 8.0 yet, and no HDMI out on my newer Pixel, so it's Termux + keyboard for on-the-go hacking.
Linux Deploy (also on Google Play) for those who have rooted their devices looks to be another choice.
I followed the link, but don't quite understand the product. Does Ubuntu Touch OnePlusOne replace the OS on the device?
So I buy one of the supported phones listed, and then erase and replace Android with this in its place? Or does it install on top?
It replaces the OS, so you will lose everything you currently have on the phone. The installation process is similar to flashing a ROM and you can choose to reflash and move back to Android in the future.
I've been using termux and a fold out bluetooth keyboard for django development & code review for quite a while on my pixel XL.
I absolutely love having my dev work on my phone, being able to hack away or do code review for 5 minutes wherever is incredible.
At home I use a Chromecast and have it on a big screen.
It would be nice to have a window manager, I suppose - but I'll probably end up in a full-screen terminal anyway.
When mobile, the biggest problem for me is just plain old screen size - I'm tempted to get a cheap Chinese tablet and use it as a remote screen somehow, leaving my phone in my pocket.
Yeah, screen size sucks. Didn't Intel build some wireless screen technology a few years ago? Somehow I never saw it in action, but as far as I can imagine, I would like a >24 inch screen which I could use as a normal DisplayPort monitor, but also, via NFC (+ some magic wireless technology), as a wireless monitor for my smartphone.
Does anybody know where I can buy such a device or why it is not yet available?
If you're going to carry a tablet and a BT keyboard, you might as well just get a light Chromebook instead. While I can run Linux locally via crouton, I mostly just SSH into my workstation and resume my screen sessions.
Depends a bit on how small a Chromebook you can get. After all, the smallest tablets are in the 7-8" screen range, while most Chromebooks I have seen are 10"+.
And it also depends on how small a keyboard you feel comfortable with. 11" screen size with corresponding keyboard is about as small as I can go and still type at full speed.
As often happens, I don't understand why one would want to subscribe in order to be notified when it becomes public.
I mean, I can understand subscribing in order to get a beta or pre-release, but if nothing is available right now and whatever becomes available will be public, it seems to me nothing more than personal data collection.
When it goes public there will also be a lot of publicity about it, so it's not likely that anyone needs to be notified.
>The whole point is not to notify you, but rather to get a feeling for how many people would eventually buy it.
Sure, but I wasn't questioning why Samsung put that up; I was questioning why an end user would want to give away their name/email/company, essentially in exchange for nothing.
I did something like this (shared kernel and chroot) on an old Nexus S back in 2012 with Gentoo. You can use a precompiled stage3 tarball which contains the fundamental filesystem layout and utilities (for the target architecture), and then for anything else that you need, you just use Portage (Gentoo's package manager) to compile stuff for you. Not super practical in terms of battery life and time if you're constantly compiling packages but a cool project nonetheless. This wasn't the exact guide I used, but it's the same idea: http://thinkmoult.com/installing-gentoo-android-chroot/
[OT] Does anybody have any idea when the Windows x86-on-ARM phones will come out? That's what I'm really waiting for and not seeing much information on.
We need phone drivers! To enable us to install any distro or OS.
I would love to download X distro directly from their website > install on the smartphone somehow > boot > install phone drivers > make calls
I have nothing against Android. I would just like to choose who creates the OS used on my smartphone.
> "I would love to download X distro directly from their website > install on the smartphone somehow > boot > install phone drivers > make calls"
Unfortunately it will never be that simple. First, the distro you want would need an ARM port targeting the same ARM SoC variant used in the phone. ARM isn't like x86, with just two targets (32-bit and 64-bit); it's a quagmire.
Second, your distro of choice would need to stick with the exact kernel version used by the phone, and I guarantee if you take any three Android phones by three different manufacturers made in the past two years, you'll see three different kernel versions, each with tweaks made just for that device. Said tweaks would make it impossible to have a distro targeting all three devices even if they did have the exact same kernel version.
Third, your distro maintainer would need to own or have access to every Android device she wished to target, and while that works for a community driven project like Lineage OS, it's not realistic for the lone Linux distro developer or small team who wants to add phones as targets for their existing desktop class distro.
>Unfortunately it will never be that simple. First, the distro you want would need to have an ARM port targeting the same version of ARM SoC in the phone. ARM isn't like x86 with just two targets (32 bit and 64 bit), it's a quagmire.
People say this a lot but that's not my experience. I have a lot of ARM machines and move binaries and complete distros (sans kernel) between them all the time without recompiling. Everything after ARMv7 is pretty well behaved.
> Second, your distro of choice would need to stick with the exact kernel version used by the phone, and I guarantee if you take any three Android phones by three different manufacturers made in the past two years, you'll see three different kernel versions, each with tweaks made just for that device. Said tweaks would make it impossible to have a distro targeting all three devices even if they did have the exact same kernel version.
Would this apply to the new Purism phone? I.e., if all the drivers are upstreamed into the mainline kernel, does that not mean you can use any kernel past the point where the drivers are included, without modification?
Who said it will be Ubuntu? If this is anything like past attempts at a Linux desktop running from a mobile phone, it will be a Samsung optimized (i.e. locked down) half baked chroot monstrosity with limited tools and locked to the phone's ancient, bastardized kernel with zero security updates from launch forward.
I think it would be awesome and amazing if I was completely wrong about all of that, but I'm not holding my breath. I'd rather wait for the Pyra to release and carry two devices than torture myself with an impossible dream.
Elsewhere in the thread I saw mentions of it running Ubuntu. I agree with you that it's unlikely that they'd give us any freedom, but if they did it'd be great.
There's a picture linked elsewhere in the thread of a Samsung device running Ubuntu at some convention.
> Linux on Galaxy allows the latest Samsung Galaxy smartphone users to run their preferred Linux distribution on their smartphones utilizing the same Linux kernel that powers the Android OS to ensure the best possible performance.
(emphasis mine)
I knew there was a catch somewhere. I seriously doubt there's a technical reason why older Galaxy models can't support running Linux as well. I don't understand why it's so difficult for Android manufacturers to allow users to install whatever they want–I bought my phone, now let me install what I want. Sure, void the warranty or refuse to support it, but don't get in my way.
We're all used to having desktop computers with generic x86 compatible processors and highly standardized internal interfaces and components, and compiling our software and installing it on any x86 computer we want.
Smartphones with ARM SoCs aren't like that because they aren't just a CPU; they also include a crapload of additional system components. Even a specific SoC model like a given Snapdragon is offered to manufacturers in many variations. Outside the SoC itself, phone hardware is far less standardized than on a PC. You can't compile your Linux distro for ARM and then install it on any smartphone; the kernel needs to be tailored to the specific phone. That's why, even though unlocked Android phones are around, it really takes the manufacturer themselves to do something like this, because only they have the detailed understanding of the platform and the resources. Otherwise, other people would be doing it.
In a perfect world manufacturers would give out this information, but it's understandable that they wouldn't (this doesn't mean that I don't think they should, but I know why they wouldn't). The issue is with "locked" bootloaders–the manufacturer is going through extra effort to make sure you can't run another operating system on your phone. This is inexcusable since it's not just a passive lack of helpfulness, it's an intent to make sure you can't have full control of your phone.
The closest to a "generic" smartphone is probably the Mediatek reference platform(s), indeed found in various Chinese smartphones. Except for presence/absence of certain peripherals like sensors, those tend to be almost identical in hardware and software. Source is unfortunately not officially available but occasionally leaked along with (extensive) documentation, and there's a small community of developers who work on these cheap devices. As a bonus, they also tend to have a completely unlocked-by-default bootloader.
The Linux Device Tree infrastructure was created as a hardware description for hardware that cannot describe itself at runtime, which is mostly ARM SoCs. However, that only works when hardware drivers are wired up to the device tree infrastructure instead of being fully hardcoded. Given the usual bare-minimum investment in software by most embedded hardware vendors, many/most drivers probably aren't.
Given a set of hardware with the same instruction set and drivers with full device tree support, one can now create one kernel for the whole set.
> the kernel needs to be tailored to the specific phone
Out of interest, would it be possible to operate an ARM SoC system like a generic x86 system, if drivers were upstreamed? Or is there too much SoC specific code?
The problem is that in many cases there's no nice automatic way to probe for devices, you just have to know the board config ahead of time. That's why device tree is so great, if the hardware manufacturers cooperate and upstream their code then you'll end up with something like the IBM PC.
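To make that concrete, here is roughly what a device-tree declaration looks like — the board simply asserts what hardware exists at which address, since nothing on the bus can be probed (all names and addresses here are invented for the example):

```dts
/* Illustrative fragment: the board declares an I2C accelerometer at a
 * fixed address because the bus cannot be enumerated at runtime.
 * "acme,accel3000" and address 0x1d are made up for this sketch. */
&i2c1 {
    status = "okay";
    accelerometer@1d {
        compatible = "acme,accel3000";
        reg = <0x1d>;
    };
};
```

A generic kernel with the matching driver compiled in can then run on any board whose device tree describes its hardware this way — the IBM-PC-like outcome mentioned above.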
You can actually compile the userspace (which is more or less most of what is meant by a distro) and install it on pretty much anything. I do this a lot. The kernels have to be recompiled but that's trivially easy.
Because the effort to make it possible is non-zero, while the userbase which would actually use it is close to zero. In my experience of enterprise software development, anything that costs more than 0 and is not essential is just not getting done, because....well, why would it be?
Jeez man. They want to do something extra cool and you shit on them for not openly supporting old devices. Nobody is stopping you from rooting your device and installing whatever you want on it.
I don't think it's just that; the point is that under the current Android hardware situation, phones are locked to kernel versions. Any necessary kernel updates after release are then manually backported to that kernel version. So that means you won't be able to install any Linux system; you'll have to run the one provided by Samsung, running their own custom kernel.
The kernel sources are all out there, the kernel part of the BSP is GPL by necessity. Samsung actually makes some effort to upstream a certain amount of the kernel code coming from their chipset vendors (because Samsung uses entirely different chipsets for different markets, so it's more convenient to have fewer downstream patches).
There is nothing, and I mean nothing, to stop you from forward-porting any driver or device tree to a new kernel, except for the desire to do so. It would be nice if device vendors maintained trees continually rebased on Linus's tree, but I'm sure you know they don't see the point.
That only means that the matching kernel drivers can't be upstreamed, that doesn't mean you can't forward port the kernel drivers. The kernel drivers are open source.
I personally wouldn't settle for a system with a closed source userspace, but Brakenshire said that you're stuck with an old kernel, which is just not true.
The largest two components in 3D acceleration are the userspace shader compiler/GL library, and the display configuration code. The latter is in (usually) the kernel these days. They should have no particular issue forward porting the kernel side of the equation, but if they want full open source 3D acceleration, they are going to need to write the userspace part of the driver (or more likely, port Mesa to it).
As for WiFi, not sure. The drivers could be crappy in ways which don't cause problems on Android; or (rarely) could rely on a userspace component. Otherwise they just haven't gotten around to it.
Also, both of these problems would be specific to a given device (or at least a given chipset). Which device are you talking about that they are having these problems with?
As for Replicant specifically, they are only satisfied with free software, right? I don't think they would even want to forward port the graphics drivers, since unless there's already enough development on an open source driver to warrant upstreaming, there's probably not a workable open source userspace to match the driver anyway, so they wouldn't distribute it either way.
Probably the docking station is the only technical reason blocking older devices, and Samsung doesn't want their customers running Linux on Galaxy without the docking station.
You've got several choices for running a GNU/Linux container on Android/Linux already: Lil Debi for rooted devices, GNURoot for a full distro on non-rooted devices, and Termux for a quick-to-get-running and reliable CLI distro.
Just a side thought, but AFAIK in all of Europe you do not void your warranty when you root your phone or otherwise brick it on the software side. It's usually written somewhere, but AFAIK not legally enforceable (because, as you said, you bought the hardware).
Older devices are probably on much older kernels that most distros stopped using a long time ago. It's not really practical to try and bridge that gap.
They are only on old kernels because of proprietary kernel module blobs. Open source those and a thousand developers will line up to mainline them for device support.
I hope the CEO of Samsung comes out and says "You know what, we're canceling this project because saagarjha couldn't just take a nice thing at face value and had to complain. There, so now no Linux on the old Galaxies or the latest Galaxies. Enjoy, earthlings."
"I seriously doubt there's a technical reason why older Galaxy models can't support running Linux as well. I don't understand why it's so difficult for Android manufacturers to allow users to install whatever they want"
Can you do this (install arbitrary OS, etc.) on any of the Google Nexus devices? I have never used one, but my impression was that they had totally unlocked bootloaders, etc.
If the bootloader is unlocked / fastboot can write to the bootloader / etc you can put whatever code on your phone you want to run.
The problem is the hardware is undocumented and the shipped drivers are proprietary so you would need to reverse engineer support for everything from the chipset to the modem to the display adapter.
So you can go throw a Debian Arm image in place of a system partition on many Android phones... it just won't boot.
"So you can go throw a Debian Arm image in place of a system partition on many Android phones... it just won't boot."
Won't even boot ?
I would expect that peripherals and radios and so on would be out of reach due to hardware/drivers but my expectation was that you could boot and get serial comms somehow ... as unusable as that would be ...
That isn't true. It will actually boot. In fact, you will even have unaccelerated graphics. Depending on the phone, you will probably have working Bluetooth. WiFi will work if you copy the firmware image from the original Android installation. Touch screen will work perfectly fine. Power management and thermal management are going to be a lot of work. Graphics acceleration is unlikely.
I already tried Emacs in Termux, but it was too slow and crashed while I was trying to do some things.
Nowadays I simply use JuiceSSH and mosh to connect to my main computer. I would like to have Emacs in my smartphone even while I don't have mobile signal though.
This has exactly the same problem as Microsoft's WSL: they're conflating Linux with "distribution built around GNU userland".
I'm not a gnu zealot. I think rms has many faults. But a major company saying "now you can run Linux on android" or "now you can run Linux on Windows subsystem for Linux" is beyond stupid.
In the case of Samsung, it's like saying you can make a sandwich out of a sandwich. In the case of Microsoft, it's like saying you can make a real roast chicken out of a soy chicken.
I've always thought of WSL as "implementing the Linux kernel API on Windows so you can run software written for it." I don't think it's necessarily restricted to GNU; it's just that that's what most people use and what's included.
That is what it is - it's a compatibility layer to provide (a subset of) Linux kernel system calls on Windows, allowing the user to run ELF binaries as-is.
The problem is Microsoft's phrasing. They're calling the distros (e.g. Ubuntu or SUSE) that can run on WSL sans Linux (as they use the NT kernel with WSL) "Linux".
That's because they need the public marketing name to be understood not by the tech deciders who understand what it is and how it works, but by their bosses.
Said bosses have very little idea what "kernel compatibility layer" or "running ELF binaries as-is" means, but they do get "running Linux".
The announcement that includes the name change (from bash on Windows of all things) is pages long.
They could have said "run a Linux Distribution" or "run Debian/RedHat/Centos".
This is aimed at technical people, who should either know or be able to understand that Linux is the name of the kernel and is explicitly not part of this - WSL is (claimed to be) free of any Linux source.
And now you've just understood why the gnu zealots (whom I'm not a part of either) insisted on the proper GNU/Linux naming, because they saw it coming and now all of it got mixed up in the single name "linux" which can mean anything and everything.
The number of things I've spent decades thinking "wow, rms/they're borderline crazy" only to end up with "oh okay, they had a point" is kind of humbling. DRM, encryption, naming, ...
Yet I'm still not in agreement with the GPL and keep thinking MIT or BSD are better. Maybe only to be proved wrong again in the future.
I always kind of got their point, but if I say "I used to run Debian Linux in a virtual machine, now I can just run Debian under Windows subsystem for Linux" there is no ambiguity, no one should be confused.
Nobody just runs Linux + the gnu environment compiled from source themselves. People use a distribution, which has a name.
To get fast VMs, you need the guest OS (in this case, Windows) to be built for the same CPU architecture, and preferably to have virtualization features on the CPU. Since you're not likely to get either, you'd have to emulate the CPU. QEMU can do it, but it'll be slow.
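For reference, what falling back to TCG emulation might look like — the invocation below is illustrative and `disk.img` is a placeholder for your guest image; it is echoed rather than launched:

```shell
# Hypothetical QEMU invocation: full-system emulation of an x86_64 guest on an
# ARM host. Without KVM (different architectures), QEMU uses TCG, so it's slow.
QEMU_CMD="qemu-system-x86_64 -m 2048 -drive file=disk.img,format=raw -nographic"

# Only report the command if QEMU is actually installed on this machine.
if command -v qemu-system-x86_64 >/dev/null 2>&1; then
    echo "would run: $QEMU_CMD"
fi
```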
Ubuntu wanted to do this but it never went anywhere... and they didn't open source their code. What an absolute shame.
I really loved the idea of it. You had the whole Android / Java / Bionic runtime sharing the same kernel as your GNU stuff. They had a way that you could even access your Android stuff from within your desktop environment like apps, contacts, text messages, etc.
It worked on Galaxy S4, and Samsung were really close to shipping it on every S4. It never did ship though, and the project was abandoned. A couple of people who worked on it tried to push for open sourcing it, but I don't think it ever was open sourced.
You could hook it up to an external monitor with an MHL-HDMI cable. Of course the SoC wasn't as beefy as ones we have now, but it was pretty OK.
What I would like to do is use my phone as portable storage device and then plug it in to external processing unit and boot from my phone. The external processing unit could be desktop computer, laptop or tablet.
The reason? When sitting behind my desk at home or office I don't like to be limited with the mobile CPU. I have the required kWhs to power a proper CPU, GPU and run 64GB of memory. I also don't want to run separate computers on each location, because keeping these in sync (OS settings, applications, databases etc) is painful.
Technically we are almost there. We can put reasonably fast flash storage to the phone. USB-C should provide enough bandwidth. On OS software side we would need some work to make plugging in/out convenient. I don't want to do a full reboot every time I "unplug" the phone from desktop processing unit and move it somewhere else. As I move between processing units I would like to keep my apps open, maybe just doing a hibernate/sleep and then waking the system up connected to a different processing unit.
This solution means double spending on CPUs and memory, but desktop hardware is relatively cheap.
Even Apple, being at the forefront of USB-C connectivity, is vastly lacking in robust USB-C operation. Plugging USB-C cables in and out causes a dramatically high crash rate on the new MBPs.
Ah so I am not the only one! My MacBook Pro was regularly crashing completely when I unplugged my dock while it was closed. I now always have to wake up my MacBook before I unplug the dock.
Google could easily enable this on their Chromebooks, that run Android apps. Especially the ARM versions should be able to run the same code and use the same apps.
> I don't want to do a full reboot every time I "unplug" the phone from desktop processing unit and move it somewhere else.
How do you expect this to work? You can’t hibernate an OS and swap out all the hardware and expect it to actually wake back up. You’re going to have to do a full boot.
That's true, but we're getting close: Linux ostensibly supports power-toggling and hot plugging CPUs, memory, and disks at this point. I think it's more focused on allowing VMs to be resized and replacing bad components in servers, but I know that some systems are already using some of this tech to improve power management and support better hibernation. I think there's still quite a ways to go to make graphical sessions robust enough to be suspended and then brought up on different hardware (I don't think things are presently set up to allow for changes to the pci controller and X sessions expect things to be pretty static, as evidenced by the hoops you need to jump through for switchable graphics), but we may get there if a major company ever decides to take an interest.
It’s probably solvable with enough resources. I don’t honestly know why anyone would fund this, though. The “let me plug my phone into random machines” crowd is pretty small. People who want roaming state are probably better served in general by cloud apps, which don’t require that the user carry a dedicated device to boot with.
When your kernel boots up, it grabs its machine-id and looks for a hibernation image to map into its memory. If it finds one, great, you have an "instant wake". If it doesn't, you boot as normal. Now imagine that your kernel tries to mount a specific device to `/hibernate` prior to looking for hibernate images. Upon hibernating, it writes its image to its machine-id. You could easily share the disk between two machines (even of different architectures) and keep two separate hibernate images on disk. You wouldn't be sharing the processes, but you could share your data.
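The lookup itself is simple — a sketch of the per-machine-id image naming described above (the directory layout and the machine-id value are invented for the example; this is not a real kernel interface):

```shell
# Sketch: each machine writes its hibernate image to a file named after its
# machine-id on a shared /hibernate volume, so images never collide.
HIBERNATE_DIR=$(mktemp -d)   # stand-in for a shared /hibernate mount

machine_id() {
    # A real system would read /etc/machine-id; fake a value for the sketch.
    echo "3f1c9a"
}

image_for_this_machine() {
    echo "$HIBERNATE_DIR/$(machine_id).img"
}

# On hibernate: write under our own id, leaving other machines' images alone.
touch "$(image_for_this_machine)"

# On boot: resume only if an image exists for *this* machine-id.
if [ -f "$(image_for_this_machine)" ]; then
    RESUME=yes
else
    RESUME=no
fi
```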
With a sophisticated enough setup, you could probably even dual-install binaries (although this would be much more involved with ELF, where you must have separate binaries, compared to something like Mach-O) to something like /usr/bin_x86_64/ and /usr/bin_arm64/ and then use your shell to select your path. This might work on a system level, but would certainly work on a per-install basis manually.
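The shell-side selection could be as simple as this (the `/usr/bin_<arch>` layout is hypothetical, not a standard one):

```shell
# Sketch: pick a per-architecture binary directory at login time, as described
# above. The /usr/bin_<arch> directories are an invented convention.
ARCH=$(uname -m)                 # e.g. x86_64 or aarch64
ARCH_BIN="/usr/bin_${ARCH}"

# Prepend the architecture-specific directory so its binaries win lookup.
PATH="${ARCH_BIN}:${PATH}"
```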
This seems like a very different end result than what was desired. All your in-flight app and system state is still unique to each machine you connect to.
Maintaining disk consistency would also be a nightmare as each machine is unaware that it’s sharing the physical disk with other machines and they’re all clobbering each other with conflicting writes. You “boot” on machine A and suddenly all the files it was reading/writing have changed unexpectedly.
Stick a hypervisor and (some sort of) NFS share in the mix to sidestep having to know about the layout of fs blocks on disk. Also, just extend the Linux kernel ELF loader to know about fat binaries to make the scheme work — there were proposed patches to accomplish this that were (unfairly) derided, but there's nothing fundamentally wrong with that proposal.
> You can’t hibernate an OS and swap out all the hardware and expect it to actually wake back up. You’re going to have to do a full boot.
Have you actually tried? You sound like you're used to Windows where if you change a single piece of hardware the whole OS goes to pieces and needs to be re-installed or go through a painful process of driver searching. Considering I can (and have) literally moved hard drives between vastly different generation machines (granted, they were the same architecture), I don't think it's that far fetched to imagine Linux might actually resume from a hibernate with no issues on different machines. I've got some spare time this weekend, and I might just have to test this theory. Even if it doesn't work out, my experience with swapping hardware in Linux leads me to believe it wouldn't be that hard to add support for resuming from hibernate on different hardware. Sounds like a good college senior CS project to me.
> Have you actually tried? You sound like you're used to Windows where if you change a single piece of hardware the whole OS goes to pieces and needs to be re-installed or go through a painful process of driver searching.
Haha I didn't come here to be a win fanboy, but have you ever actually done this? I build all my own machines and I have removed a windows boot drive from a system, replaced the motherboard and all major components, put the drive back in and booted successfully without making any changes at all. I've done this on NT, XP and 7. Have not yet done it on 10. Some things will definitely not work right, and you will for sure have to update some drivers, but in all likelihood it will boot. It helps if you uninstall the board-specific drivers (chipset, etc) prior to the teardown, but I have done it both ways.
Once the IP network is set up, you can easily run NFS, Samba, or VNC over that network interface.
A few years back, when I worked on an AOSP project, I disliked the tools from Android. I put an "Ubuntu for ARM" rootfs in a subdirectory inside a cell phone platform and chrooted to that subdir. I had a full Ubuntu environment inside a cell phone: xfce4-terminal, ddd, nfs, samba — ~40k Ubuntu packages all available with a simple apt-get install command.
As long as you have enough RAM, CPU cores, and storage space inside the cell phone, it is very easy to set up.
If you keep the VNC server running, you can plug and unplug that cell phone to any desktops or thin clients, and you can instantly log back into the same GUI environment.
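A minimal sketch of that persistent-session trick, assuming a VNC server is installed inside the chroot (the rootfs path, display number, and geometry are illustrative):

```shell
# Sketch: keep a VNC session alive inside an Ubuntu chroot on the phone so any
# desktop can reattach later with: vncviewer <phone-ip>:1
CHROOT=/data/ubuntu
VNC_CMD="vncserver :1 -geometry 1280x800"

start_session() {
    # The server runs inside the chroot; the session survives plug/unplug
    # because nothing on the desktop side owns it.
    chroot "$CHROOT" /bin/sh -c "$VNC_CMD"
}

# Needs root plus an existing chroot, so only attempt when both are present.
if [ "$(id -u)" -eq 0 ] && [ -d "$CHROOT" ]; then
    start_session
fi
```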
Sounds rather nice, especially considering I've been installing various systems over the last few days and flashing images to a thumb drive gets old really fast.
Thanks for mentioning this, it'll probably save me a few hours in the future!
I miss my N900, I could run full fledged Debian on it. The Maemo OS also had Terminal app with full SSH client & server app. I used to SSH into my phone from my laptop just for kicks.
And then Nokia had to go and fuck it all up.
Maybe I'm the only one that is getting tired of smartphones? I feel like they just take over everyone's life sometimes. I'm ready to go back to a $30 Nokia that just f'n works for half a decade, doesn't die every day due to battery usage, and in general doesn't spy on my entire life story for the supposed purpose of better targeted ads.
I think there's some serious room for a comeback in the mobile market, the way these devices are going. Outrageously expensive disposables aren't sustainable forever. At some point people get tired of throwing $500 in the bin for the exact same thing they had last year, just so they can have a full day's worth of battery charge again.
It's already mentioned in sub comments but I'd like to mention GNURoot Debian [0] in a top level comment.
It is regular Debian (a full repository of unmodified packages, unlike Termux), and unlike a variety of other chroot-based solutions, it doesn't require root access (it utilizes proot).
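The proot trick is worth spelling out: instead of calling `mount()` and `chroot()` (which need root), proot uses ptrace to intercept a process's syscalls and rewrite paths into the guest rootfs. A rough sketch of an invocation (the rootfs path is illustrative):

```shell
# Sketch: a rootless "chroot" via proot. No mounts happen; proot intercepts
# syscalls with ptrace and remaps paths, with -b binding host directories in.
ROOTFS="$HOME/debian-rootfs"
PROOT_CMD="proot -r $ROOTFS -b /dev -b /proc -b /sys /bin/sh"

# Only run when proot and the unpacked rootfs are actually present.
if command -v proot >/dev/null 2>&1 && [ -d "$ROOTFS" ]; then
    $PROOT_CMD
fi
```

The ptrace interception costs some performance versus a real chroot, which is the usual trade-off of these rootless solutions.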