An Introduction to C and GUI Programming (www.raspberrypi.org)
137 points by AlexeyBrin on 2019-04-24 | 73 comments





C for GUI programming? GTK2 for starting a new project today? I think new learners would be much better off with Python 3 and GTK3 (or Qt5)...

Or any of the other arbitrary |L|x|F|x|P| combinations of languages, frameworks, and platforms. But this one didn't have a book yet.

I disagree. There is always more value in learning something simple and mastering it than in learning something complex and barely scratching the surface.

It teaches you the fundamentals, especially on an embedded system such as the R-Pi.

The first language I learned was C on an ARM processor. Learning about registers, bit manipulation, seeing how the compiler optimizes the code based on GCC flags, understanding pointers, memory management - the whole thing is difficult for a beginner, but to me it was immensely helpful. I also love Python/Julia/Java/JavaScript, but man, it makes you appreciate the neat things higher-level languages offer.

One day I want to build my own 8-bit computer from discrete gates. I’m sure it won’t be a waste of time.


The embedded environment is what makes this setup make sense. But in the general case, C is less than ideal for GUI programming compared to what's available in other languages, because it lacks first-class closures and richer data types (or at least classes). Otherwise you are basically in the 80s doing Win32 GUI programming all over again.

What do first-class closures have to do with anything?

Being able to pass a closure (lexically scoped anonymous function) as an event handler is invaluable in GUI programming.

So you think C can't do that? Of course it can. Anything you want to create, C can handle just as well if not better.

C doesn't support closures natively, so have fun building it up yourself. You can pass a function pointer. That's it.

My point is that in C you can do anything, including creating closures and passing them however you wish.

With enough coding, you can "do anything" in any Turing complete language. This is meaningless. It doesn't mean it is advisable (simple, maintainable) to do so. C does not support a syntax for closures, so you'll really just be calling functions with an 'environment' arg or something similar. What's the point?
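To make that concrete, here is a minimal sketch of the 'environment' arg workaround (the names here are made up for illustration, not any real library's API):

    #include <stdio.h>

    /* A poor man's closure: a function pointer bundled with the
       environment it needs. */
    typedef struct {
        void (*fn)(void *env);
        void *env;
    } closure_t;

    static void greet(void *env) {
        const char *name = env;   /* recover the "captured" variable */
        printf("Hello, %s!\n", name);
    }

    static void invoke(closure_t c) {
        c.fn(c.env);              /* call the function with its environment */
    }

    int main(void) {
        closure_t c = { greet, "world" };
        invoke(c);                /* prints "Hello, world!" */
        return 0;
    }

It works, but every capture has to be spelled out by hand - which is exactly the boilerplate a real closure syntax would remove.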

My point is that he claims C can't do closures or pass them either. Follow the thread.

As far as being Turing complete goes, CSS is supposedly that too, so good luck with that.


When people say a language supports closures, they are generally talking about syntax. C does not support a convenient syntax for closures, period.

Here's a guy who developed a closure "library" for C: https://nullprogram.com/blog/2017/01/08/

It's clever, but he actually had to resort to assembly...


You think everyone not using C is just an idiot?

I've been doing GUI programming for many years (since the DOS days), and while closures can be neat for small stuff (like updating an object field and calling some sort of refresh function for a visualization of the object when the GUI widget that represents the field changes), they are certainly not invaluable, and you can do GUI programming perfectly fine without them. Just pass a method pointer (obj+method) in C++ and similar languages (e.g. "procedure of object" or "function of object" in Object Pascal), or a plain function address that takes an extra pointer (or whatever) argument supplied during handler registration. For example, in a C toolkit of mine you register stuff with "register(widget, event, handler, param)" and the handler is defined like "handler(widget, eventinfo, param)" - sketched below.

Actually, IMO if you are using closures for events that run to more than 5 or so statements (I'd say lines, but people love to pack their lines with multiple statements), you are doing a disservice to every poor soul who will have to understand your code later.
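For the curious, the register/handler shape mentioned above looks roughly like this (a hypothetical sketch, not the actual toolkit):

    #include <stdio.h>

    /* Hypothetical toolkit types mirroring the
       register(widget, event, handler, param) shape. */
    typedef struct widget widget_t;
    typedef struct { int type; } eventinfo_t;
    typedef void (*handler_t)(widget_t *w, eventinfo_t *ev, void *param);

    struct widget {
        handler_t on_click;   /* one event slot, kept minimal here */
        void *param;          /* stands in for the closure's environment */
    };

    static void register_handler(widget_t *w, handler_t h, void *param) {
        w->on_click = h;
        w->param = param;
    }

    /* The toolkit's event loop would call something like this. */
    static void fire_click(widget_t *w) {
        eventinfo_t ev = { 1 };
        if (w->on_click) w->on_click(w, &ev, w->param);
    }

    /* Application side: param carries whatever state the handler needs. */
    static void on_click(widget_t *w, eventinfo_t *ev, void *param) {
        int *counter = param;
        printf("clicked %d times\n", ++*counter);
    }

    int main(void) {
        int clicks = 0;
        widget_t button = { 0 };
        register_handler(&button, on_click, &clicks);
        fire_click(&button);  /* prints "clicked 1 times" */
        return 0;
    }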


There's a school of thought which considers complex data types to be more a part of the problem than of the solution. You may disagree with it, but nevertheless one certainly doesn't need complex data types to write GUIs (or anything else) effectively.

Also, in this universe the Win32 API was released in the 90s, not the 80s.


The complexity should be the same no matter what language you use, but how you structure and organize that complexity changes drastically. Hence in Ruby it takes only 2-3 lines of code to accomplish what in Java requires a few ItemFactories.

#MandelaEffect

> the Win32 API was released in the 90s, not the 80s.

Yeah, well, kinda... technically you are correct, but Win32 is 99.998% backwards compatible (as an API, not an ABI) with Win16, which was first released in the 80s.


I agree with you if you are talking about Python vs C.

But a new GTK2 book sounds like teaching Python 2 for beginners.


The author does touch on this on page 76 of the book (quoted below):

GTK 2 and GTK 3

One thing worth mentioning at this point is that there are two versions of GTK in use. The current version is GTK 3, and this is still under active development – it changes quite a bit between releases.

For this reason, many people prefer to use the older version, GTK 2. This offers most of the same functionality, but it is stable code which doesn’t change very much any more. Some say that it is always better to use the latest and greatest version of anything, but more cautious old engineers (like your author…) tend to prefer the older version that has had a lot more testing and with which people have stopped fiddling about! In terms of the examples we are looking at here, there aren’t that many significant differences between the two, but just to be clear, the examples in this book will use GTK 2.
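(For reference, a minimal GTK 2 program looks like the sketch below - my own illustration, not an excerpt from the book. Assuming pkg-config and the gtk+-2.0 dev package are installed, it builds with: gcc hello.c $(pkg-config --cflags --libs gtk+-2.0))

    #include <gtk/gtk.h>

    /* Quit the main loop when the window is closed. */
    static void on_destroy(GtkWidget *widget, gpointer data) {
        gtk_main_quit();
    }

    int main(int argc, char *argv[]) {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(window), "Hello");
        g_signal_connect(window, "destroy", G_CALLBACK(on_destroy), NULL);

        GtkWidget *label = gtk_label_new("Hello, GTK 2");
        gtk_container_add(GTK_CONTAINER(window), label);

        gtk_widget_show_all(window);
        gtk_main();   /* blocks until gtk_main_quit() */
        return 0;
    }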


GTK+ 3 has stabilized now. GTK 4 is under active development, with hopefully a first release this year.

The only problem I have with GTK3 is that the quality of the documentation I've found is... less than ideal. Broken links and extremely terse descriptions of call parameters are a common complaint of mine.

Which, like Gtk 3 relative to Gtk 2, is also backwards incompatible, so using Gtk 3 won't be future-proof either (and neither will Gtk 4, for that matter, unless the Gtk developers stop breaking their library every major version and decide to provide a stable API people can target - but considering they can't even decide on a stable way to make themes, I have little hope for that).

GTK 2.0.0 was released 17 years ago. GTK 3.0.0 was released 7 years ago. GTK 2.24 got its last patch release, 2.24.32, about a year ago.

Ten years as the mainline version, and a supported compatible version for 16 years, is pretty darn impressive! If GTK 4.0.0 comes out next year, then GTK 3 will have had 8 years as the mainline version. It's not quite Win32 levels of stability, but it is not like they break this stuff every year or something...


Thing is, just last month I wanted to play an older game that for some reason was linked against Gtk1. And pretty much every application made with Lazarus today is linked against Gtk2 (as the Gtk3 backend is still in a very pre-alpha state), and I'm sure there are other similar situations. And as you can see with this book, people are still working with Gtk2.

At this point it doesn't matter that it was released 17 years ago; what matters is that there are programs using it.

Honestly, I can see Gtk1 to Gtk2 breaking compatibility, since they probably made some blunders and it wasn't very long-lived anyway, but Gtk2 to Gtk3 should have kept backwards compatibility - GUI libraries are an important foundation for a platform, and breaking changes break every application that relies on them. Instead, the developers not only broke Gtk3 but didn't even try to design a Gtk3 API that would allow Gtk4 to be backwards compatible.

It is frustrating when, as a developer, you want to ensure your program keeps working in the future and the foundation it relies on sabotages you. I can release a Win32 application today and be certain that, if people are still using x86 Windows desktops, it will work in 20 years. Hell, chances are it'll still work on Linux thanks to Wine - whereas the only way I'd get the same sort of compatibility with a "native" UI library would be to write my own toolkit and use X11 directly (and even that assumes the Wayland developers won't have convinced everyone to drop X11).

Sometimes I consider just targeting Win32 for my stuff and telling people to use Wine on Linux and macOS; it'll be more likely to keep working in the future than any native UI library.


If the GTK1-based game had bundled its dependencies, it would have worked today as well. I think better solutions along those lines are the way forward, not requiring devs to maintain compatibility forever.

Expecting every application to bundle all the shared libraries it depends on is absurd; it invalidates all the benefits shared libraries provide - being shared with other programs, getting shared updates, and of course gaining new features. As an example, if Gtk3 were backwards compatible with Gtk1, the game would not only work, its UI would be a native Wayland client, despite the game being released a decade before Wayland was a thought in anyone's mind. And to stick with games, there is another good example of why bundling shared libraries is a bad idea: old games that use SDL often need their bundled SDL copy replaced with a newer one to work, since the old one uses interfaces that have since been invalidated (e.g. OSS instead of ALSA).

Outside of games, for software that touches the Internet you probably also want libraries to get shared security updates.

And of course, having to bundle 50+MB of libraries and data (themes etc. needed by a GUI toolkit, plus whatever other libraries need) alongside a small executable is a ridiculous idea in itself.

No, a desktop system should provide some minimum functionality that desktop applications can rely on, and keep that functionality stable so that applications keep working.


I'm only expecting applications that haven't updated their code in 10+ years to do so.

Systems like Flatpak have a runtime mechanism where multiple applications can share a set of dependencies. A GTK1 runtime could exist (or maybe already does?), for example.


Yes, Flatpak sounds like it solves the compatibility problem (although it remains to be seen what issues it has in practice), but it is still only half the solution: a Gtk1 runtime could exist, but since that runtime is still the abandoned Gtk1 libraries (abandoned because Gtk2, Gtk3 and now Gtk4 decided to break backwards compatibility), it doesn't provide any new functionality or fixes to the programs linked against it. In contrast, on Windows, since Win32 is both backwards compatible and still actively developed, if I use a program from Windows 95 that opens the common file dialog, I still get the overlay icons from my VCS extension and the compressed-file status. A Gtk1 program, on the other hand, would still show the old Motif-like open dialog written back in the late 90s.

Or, perhaps more fitting for the modern Linux desktop: Gtk1 and Gtk2 will never support Wayland, nor any of the new functionality introduced for scaling. (Did you know that, despite the common narrative from Wayland fanboys, X11 already provides the functionality needed to implement per-monitor and even per-window scaling, with pretty much 99% of the same code a toolkit needs for Windows 8+ scaling - code that is most likely already written? It only needs some cooperation from the window manager, and this sort of code could even be implemented for XWayland... but it also needs a bit of modification on the toolkit's side, and while that could be done for Qt5 and perhaps Gtk3, it would require work on Gtk2 and especially Gtk1, which won't happen. If Gtk2/3/4 were backwards compatible, all programs would benefit from such new functionality.)


https://www.nand2tetris.org/copy-of-cool-stuff

This is definitely not a waste of time. But though building a physical ALU from TTL is feasible, constructing a reasonable amount of memory would be a waste of time and money.

Once you have written the nand2tetris HDL for the hardware, you could translate it to Verilog and burn it onto an FPGA quite easily.


This is the coolest thing ever, thank you!

This is exactly what I wanted to learn. I really wanted to 'see' C in action at the lowest level. This aspect of programming is what I am curious about.

But I never found decent beginner-friendly resources.

Anyone reading this comment, please chime in with your suggestions.


If you want to see C at its lowest level and understand where it shines, I'd recommend the embedded and bare-metal realms. This Raspberry Pi tutorial set, for instance:

https://github.com/bztsrc/raspi3-tutorial

It's also fairly popular for emulators, since its types are orthogonal and its low-level design is well suited to building a pseudo-machine. It's definitely being displaced in that realm by C++/Rust/Nim/etc., though.


Thank you very much!!!

If I may add my two cents... I highly recommend reading through the standard library and then looking at how it is implemented in actual C. The string library is probably the most important; this is where most people mess up!

If you can figure out pointers and how to actually get the size of things, you will be pretty much 99% of the way there. sizeof(char *) doesn't get you the size of what the pointer points at... it gets you the size of the pointer itself, which varies depending on the architecture. sizeof(myArray) doesn't give you the number of elements in the array; it gives you the number of bytes the array can hold. To get the number of elements you must do sizeof(myArray)/sizeof(myArray[0]). And sizeof() is a compile-time operator; it knows nothing of runtime.
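A quick demo of those pitfalls (the sizes printed will vary by platform; %zu is the printf conversion for size_t):

    #include <stdio.h>

    int main(void) {
        char *p = "hello";
        int values[10];

        /* Size of the pointer itself (typically 4 or 8 bytes),
           not of what it points at. */
        printf("sizeof(p)      = %zu\n", sizeof(p));

        /* Size of the pointed-to object (1 for char). */
        printf("sizeof(*p)     = %zu\n", sizeof(*p));

        /* Total bytes the array occupies (here 10 * sizeof(int))... */
        printf("sizeof(values) = %zu\n", sizeof(values));

        /* ...so the element count is total bytes / bytes per element. */
        printf("count          = %zu\n", sizeof(values) / sizeof(values[0]));

        return 0;
    }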

https://en.cppreference.com/w/c/string/byte

https://github.com/lattera/glibc/tree/master/string


It's possible you've already come across this, but if you haven't, it speaks to what you just said: https://www.evanmiller.org/you-cant-dig-upwards.html

Hasn’t it always been the case that the current generation skips learning about useless lower layers used by older generations? How could it be that the layers you grew up with and learned are the special ones which are still useful, when all the layers which came before, which people at the time thought essential, have turned out not to be so?

I.e. why is it that C and assembler is “fundamental”, but electronics is not?

(I have written about this before: https://news.ycombinator.com/item?id=19106922#19113359 )


I mean when I went to university we absolutely went over the electronics side of things. Certainly not to the same depth as an electrical engineer would have, but it was covered.

Lol you really went all in on the sunk cost fallacy there huh

I'm an electronics engineer who started out learning to program in assembler on PIC16 microcontrollers. My argument is not against learning the fundamentals. I just know from teaching that the primary problem people have when learning is that they get stuck and stop. Making GUIs in Python (given good educational resources, like a book) gives results much faster, and more quickly becomes a practical skillset for GUI programming and other useful tasks.

For learning C and starting to poke at low-level embedded stuff, I would recommend starting with Arduino (and then moving to plain AVR) - and not going anywhere near a GUI toolkit.

There is a reason we don't actually throw kids into the deep end when teaching them to swim. Most of them would just drown...


I think if I wanted to bang out a simple embedded design with a GUI, this is exactly the direction I'd go. It isn't like you usually need super complex behaviors, and I'd hate to add extra toolkits or a big framework just to get a few buttons on a screen.

I've never dealt with the Raspberry Pi world, but I can see the value of stripping the software on one down to the bare minimum. I wonder if someone has put together a FreeRTOS toolchain for it.


You're ignoring the audience and the platform. The audience is people whose only computing device is the Raspberry Pi. Compiling a Qt app on most of the rpi models would take forever; the lower-end models are pretty resource-constrained. Most of the rpi software is written with GTK, so this is a primer on the desktop toolkit used to build the applications the user is already using.

jononor should have recommended Gtk for Python via https://pygobject.readthedocs.io/en/latest/. Not sure why PyQt was suggested instead.

I did not suggest compiling Qt apps (or doing C++). You can build Qt apps in pure Python using the libraries installed via the package manager.

As a low-level developer, I thoroughly disagree. I think it's disappointing that we're given absolutely insane amounts of processing power, with Intel pushing the limits of physics, and developers keep wasting it on inefficient languages like JavaScript, Python, Java, etc. They have their uses, but using them to develop every piece of software is an inefficient waste of resources.

For fast, efficient software there will always be low-level languages such as C, C++, etc., and I would always urge more people to learn them, even if just to appreciate how much is managed by their high-level language and to understand the trade-offs they're making.


Rendering a GUI is much more overhead than the business logic that sets up the GUI and makes it interactive. A GUI is only "slow" if a user-triggered operation takes more than 10ms. Interpreting a Python callback is many orders of magnitude cheaper than that, and complex algorithms can be outsourced to other languages if needed. Focus on your bottlenecks first. Don't pre-optimize.

> Don't pre-optimize.

Don't pessimistically de-optimize either. It's harder to remove bottlenecks from mature applications than it is to plan ahead for the target platform and common use cases.

By all means, prototype in Python, but if your target platform is, at minimum, an R-Pi 2 and you want the experience to be snappy even when there is heavy IO or CPU work being done... then don't release on Python; use the prototype to drive an implementation in C.

Maybe even consider compiling from a higher-level language into a C library to minimize the boilerplate.

Planning goes a long way, even if you don't plan everything up front.


> It's harder to remove bottlenecks from mature applications than it is to plan ahead for the target platform and common use cases.

I disagree. It's easier to release first and change what you need later, since there might be only 1-10 sections of code that need to change. Python is extremely fast to write, maybe 3x faster than C. This means you can get seed funding and sales sooner, and if the idea is good enough to sustain a decent salary, you can just hire out optimization for a v2 release. It also allows your application to appear to get better over time, which makes long-term customers happy.


For a typical SV B2C web application, sure, but we're talking about developing applications for a low-power device.

Knowing your target platform and your end goals can aid in planning. Planning your architecture ahead of time can save you from having to optimize later... and often optimizing later is harder than simply doing the easy, right thing first.


"Low power" or "slow CPU" has nothing to do with it. When you have a bottleneck (GTK, X11/Wayland, video drivers) that is 100x more significant than the binding language (Python), switching languages will only buy you 1% speedup at best. I'm not saying to go forth unplanned. I'm saying that using C over Python in this case is a waste of time.

This is not premature optimization. If you start writing your code in JavaScript, any development on top will continue to be JavaScript, until you end up with a full project of JavaScript and you can't move to C, because doing so would mean rewriting the entire project.

> developers keep wasting it on inefficient languages like JavaScript, Python, Java, etc.

> For fast, efficient software there will always be low-level languages such as C, C++, etc.

This sounds like every "low level developer" ever, in that the opinions are completely removed from reality. It isn't 19<whatever>. If by inefficient you mean slow, then Java isn't anywhere near comparable to JavaScript or Python. Beyond being a completely different type of language, Java is fast. So are C# and F#. So are Chez Scheme and SML. So are plenty of languages much higher-level than C, and safer than C++, that are just as fast as C and C++ (if not faster in some cases, depending on development time, application, and developer).

Also, C++ is low-level? It's an insanely complex high-level language.


> This sounds like every "low level developer" ever, in that the opinions are completely removed from reality. It isn't 19<whatever>.

Who cares what year it is. The fact is, with low-level languages you can write software that consumes fewer resources. That won't ever change.

> If by inefficient you mean slow, then Java isn't anywhere near comparable to JavaScript or Python.

Sure, I (and presumably the poster you responded to) agree. Java is plenty fast for most things, but its memory usage is horrendous. Like, seriously bad. IntelliJ seems to take a solid 1.5 GB simply to exist. I'd like to use Java (well, JVM-based languages really) more, but basically all JVMs are memory hogs that take too long to start up. Graal might help here; we shall see.

C#, F#, Chez, et al. follow suit. (Chez is probably the best of that lot, though, and would be my choice.)


> The fact is, with low-level languages you can write software that consumes fewer resources. That won't ever change.

And you'll be much more susceptible to bugs, schedule elongation, and robustness issues. That won't ever change either.


> I think it's disappointing that we're given absolutely insane amounts of processing power, with Intel pushing the limits of physics, and developers keep wasting it on inefficient languages like JavaScript, Python, Java, etc.

It's called "economics". C and C++ are simply too cumbersome for most applications; most applications don't benefit from that speed (either because the deadlines aren't that critical or because the bottleneck is I/O), and their project deadlines can't afford the velocity tax those languages impose. Especially when many higher-level languages let you opt out of "slow" features (or FFI into C/C++/etc.) for your hot path. Lastly, higher-level languages are improving: Go is probably the easiest language out there and it's almost as fast as C (in the same ballpark, though still slower), and Rust is far, far friendlier than C or C++ while performing on par.


Not disagreeing, but it's interesting that you'd lump Java in with JavaScript and Python. Java certainly has its problems, but it's considerably faster and more efficient than both.

I'm an embedded C developer. I haven't followed Java in years, but as far as I knew it was slow, bloated, and full of security holes. I'm very happy to be corrected on that, though.

He explains his reasoning both on page 76 of the freely downloadable book and in the parallel complaint thread on the linked article.

His reasoning is that GTK2 is stable and GTK3 changes with each iteration.


It is impossible to use the GTK3 or GTK2 bindings for any language without understanding the C interface. Usually there is no real documentation on the bindings.

I would actually pick Vala over anything else for GTK3 applications. It's really pleasant to work with and compiles to optimized C code, so you get serious performance wins over Python (and type safety to boot).

Vala is nice, but really niche at the moment. I think people would be better served using Python 3 and Qt for maximum ease, documentation/help, and cross-platform ability.

Vala really is a remarkable engineering achievement that is highly underrated. I can only imagine what could be possible if Vala had mindshare similar to Python.

I'm not sure either makes sense. So much of GUI programming is learning a particular toolkit, especially in the case of Qt, where it's an entire _runtime_ - memory management, string types, threading, async/signaling, etc. This isn't even mentioning the hoops you have to jump through to compile a Qt program (if you're using C/C++).

These toolkits are inevitably on their way out. The future is the browser stack, loath though I am to accept it.

As a tangent, it wouldn't be horrible if something like a browser were used as a window manager and UI runtime, a la ChromeOS, except with apps running in containers (and without a bastardized Linux kernel and the restrictions that come with it). This would have lots of benefits: the whole frontend benefits from all the investment going into the web (including wasm), and the whole backend benefits from the investment in containers. Notably, packaging/dependency management, process management, log management, etc. would all be much simpler/easier.


Immediate-mode GUI frameworks make for a pleasant programming experience:

https://github.com/vurtun/nuklear


What's the benefit of immediate-mode libs? I haven't had the opportunity to use one before. I'm hopeful for libui in the future, but maybe I should consider immediate-mode libraries?

https://github.com/andlabs/libui


No callbacks, so no forced heap allocation or lifetime management. No "oh, you want to use FooGUI? First let me tell you about the object hierarchy and object lifecycle". No MVC. No custom, required data structures.

The programmer just copies and pastes from the demos, makes small modifications, and it just works.

(No offense intended towards your framework; I've used GTK, Qt, and Swing and have never understood them the way I understand nuklear and Dear ImGui.)
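To make that concrete, here is roughly what one frame looks like in nuklear, adapted from the style of its README examples. This is a fragment, not a full program: it assumes <stdio.h> and the single-header nuklear.h are included and that `ctx` comes from one of nuklear's platform backends (not shown):

    /* One frame of UI, immediate-mode style: widgets are re-declared
       every frame and input is read from return values, so there are
       no callbacks to register and no widget objects to keep alive. */
    static void draw_ui(struct nk_context *ctx) {
        if (nk_begin(ctx, "Demo", nk_rect(50, 50, 220, 120),
                     NK_WINDOW_BORDER | NK_WINDOW_TITLE)) {
            nk_layout_row_static(ctx, 30, 80, 1);  /* one 80px-wide, 30px-tall cell */
            if (nk_button_label(ctx, "button"))    /* true on the frame it's clicked */
                printf("button pressed\n");
        }
        nk_end(ctx);  /* always pairs with nk_begin */
    }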


For a Raspberry Pi that is unlikely to be plugged into a monitor, it seems like a better GUI for most use cases would be a simple web server.

The Raspberry Pi was designed from the beginning to be used as a cheap desktop computer that people can play around with and students can buy with their own money, much like the BBC Micros and ZX Spectrums of the 80s. From day one it came with a full Linux desktop system and was meant to be used on its own as a proper computer.

It's worth remembering that the core mission of the Pi was to be an affordable educational computer. In that context, this publication is entirely on-point for the foundation.

There's an almost symbiotic relationship going on: the foundation doesn't exactly ignore the hobbyist or industrial communities - far from it - but it still holds to its core mission, and these notable, but ultimately ancillary, communities provide the budget for it.


The author speaks highly of the book "C for Yourself". An electronic version of the book from the Microsoft Programmer’s Library 1.3 CD-ROM has been posted [0] on the truly excellent PCjs Machines [1] website, where a lot of other older programming information can also be read [2].

[0] https://www.pcjs.org/pubs/pc/reference/microsoft/mspl13/c/c4...

[1] https://www.pcjs.org/

[2] https://www.pcjs.org/pubs/


Whoa, this little book just arrived in my mailbox in my Bulgarian village apartment. Can't wait to crack it open.
