Ask HN: What does an electrical engineer actually do at work?
140 points by johan_larson on 2016-09-04 | 118 comments

What's your daily work like? Do you spend all day at the keyboard working with CAD and sim tools, or do you work with actual hardware, too?

The reason I'm asking is that I'm a pure software engineer, and I sometimes wonder if I'd have been happier on the hardware side. But I've gotten some reports that these days the work isn't actually all that different.




As with software engineering, there are many different jobs for electrical engineers. Some are more design and simulation, some are testbench, some are more theory, down at the physics level. Others are support and sales. Larger companies might have more defined roles, whereas at others you do a bit of each.

I am an embedded software engineer at a company that does handheld electronics, and agree that both jobs are along spectrums. I spend a lot of time with the hardware engineers, and we all spend time testing things on our benches with signal generators and oscilloscopes as well as using computer tools to develop (software in my case and hardware in theirs).

If you want to work with some hardware, or on a test bench with lab equipment, you can always move over to embedded, as there are many jobs available; some even say it's less competitive.


> If you want to work with some hardware, or on a test bench with lab equipment, you can always move over to embedded, as there are many jobs available; some even say it's less competitive.

My experience has been that the embedded jobs are controlled by companies that are very picky about experience. I am an EE that has been out of the game (professionally) for four years. It's like pulling teeth to get anybody outside the Valley to talk to me about an embedded job.


Sleeping all night on the night shift.

This made a lot more sense after I saw your profile:

> Working as shift engineer in power plant


Wow Homer Simpson's job does exist.

What subfield are you in?

I do a lot of drawing and sometimes I build panels for prototype machines, but mostly I spec out components, talk to vendors, and do paperwork. FWIW, I work at a small company that makes industrial meat processing and packaging equipment.

Typical day as an electrical engineer at an autonomous vehicles company:

- Work on schematics and layout for various boards (usually only get time for one each day).
- Go to meetings with multiple engineering teams to make sure that cross-functional requirements are being met (usually the software and mechanical guys making demands about what they need, which really means what they want to make their lives easier by making the EEs suffer).
- Troubleshoot issues on already-built hardware and add the fixes to upcoming revisions.
- Argue with the purchasing department about why I need a certain piece of test equipment.

- A lot of component research and validation, and meetings with vendors to determine if a given component is the right one.

And those are just the pure hardware tasks; there's always the potential to have to deal with firmware issues as well.

I may spend half my day at my desk, at least an hour or two in a lab, and the rest goes to meetings.


God, meetings. So many meetings. Why? Why?!

A significant fraction of designing any complex system consists of getting the people who design different parts to agree on the interface between those parts.

That was a very high minded response to an attempt at commiserating.

Yeah. Sorry...

I do think, though, that it's possible for an organization to structure its meetings so that they're a useful and productive part of the job, and not a thing to be dreaded in the least.

I've seen this done with varying degrees of success/failure.


I think a real problem is that meetings bloat in attendance, and the amount of useful stuff you can get done is inversely proportional to the number of attendees. This is doubly true when you don't have good, clear leadership and an agenda, which in my experience you often don't.

I feel like I go to a lot of meetings with a lot of people who are just there to be "in the loop" rather than actually contributing to the topic at hand.


Because our hive-mind technology is sadly underdeveloped. :(

If there were a Borg-a-Tron 5000 that we could all stick our heads in to let dev and QA and PM and mgt know what the rest of the team was thinking, I'd use it. But only during work hours.


How much time do you spend these days being chased by ISO 26262 requirements, and the like?

Oh God! Not the automotive requirements!

I'm an electrical engineer by trade, and do embedded systems design for a living. I do a lot of board placement and layout planning, plus writing test plans for verifying that what we build is good at scale. I also do a reasonable amount of debug, as that just comes with the territory. I'd say it's about 70% software time for planning and CAD work, 30% hands-on lab time. Note that it's not uniform - I might be in the lab for two months straight, then not go in for six.

Generally, "embedded systems" implies some programming as well, but I do little to none of that. I think I'm the exception rather than the rule in that sense.

If you wanna get simplistic about it, I spend half my time playing "connect the dots" in overpriced software, and the other half arguing with people about where I decided to put the lines.

On the one hand, there's a lot more standardization in the components I string together to make a working system. On the other hand, it can be just as hard to track down the source of a problem. You happen to have caught me at a point where I'm stuck trying to figure out the proper answer to a mysterious boot issue, and let me tell you what - it's about as frustrating as frustrating gets. You have to push yourself to think hard about what you haven't done, even when you've done everything by the book. ("The book" in this context is documentation, design guides, processor reference manuals - things that tell you what to do to get your system working.)

Point I'm trying to illustrate - don't think that hardware is any easier than software. It's very, very gratifying to see something real, that you can hold in your hands, work. Doubly so when people actually use your stuff and tell you how much they love it. (Yay, consumer electronics!)


> embedded systems

> there's a lot more standardization in the components I string together

Embedded systems is just one discipline within EE. How many people design something like an LM317 that ends up deployed in embedded devices? EE programs try to teach that too, until it all gets too complex not to specialize in one branch. Another discipline I was offered to specialize in was motor appliances and power amplifiers. Those don't have much to do with CS, except maybe for digital control circuits.


When I said "a lot more standardization", I had things like DRAM or PCIe in mind, which are spec'd by their relevant consortia (JEDEC and PCI-SIG, respectively). As in, there are clear limits on how to make it work, and if you meet those limits, it's fine.

I do a lot of POL planning for purposes of thermal management, so yeah. I'm definitely not that specialized.


> DRAM or PCIe

Well, that's also a better example for my point: the speeds they achieve require some research.


Maybe for a flyby topology, but I mostly work with point-to-point topologies. That's basically an exercise in good impedance control.
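For a rough flavor of what "impedance control" means, here's an illustrative Python sketch using the classic IPC-D-317 closed-form approximation for the characteristic impedance of a surface microstrip. This is strictly back-of-the-envelope: the formula is only valid for roughly 0.1 < w/h < 2.0, and real designs rely on field solvers and stackup numbers from the board fab.

```python
import math

def microstrip_z0(w, h, t, er):
    """Approximate characteristic impedance (ohms) of a surface
    microstrip trace, per the old IPC-D-317 formula.

    w  -- trace width, h -- dielectric height, t -- copper thickness
    (any consistent unit), er -- relative permittivity of the dielectric.
    Rough only; valid for about 0.1 < w/h < 2.0."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))
```

On a hypothetical 1.6 mm FR-4 board (er around 4.5) with 35 um copper, a 3 mm wide trace lands near the usual 50 ohm single-ended target, which is the kind of sanity check this formula is good for.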

> It's very, very gratifying to see something real, that you can hold in your hands, work. Doubly so when people actually use your stuff and tell you how much they love it. (Yay, consumer electronics!)

This is why I quit my software job and started to study EE. I'm currently waiting to get my PCB back from Oshpark. It does worry me a bit that some people here describe their EE job as more boring and frustrating than software development.


I've heard people compare EE to building up tolerance in an addiction. As you gain experience, the thrills of discovery and problem solving end up being higher peaks, but the high is shorter and less frequent.

Somewhat related to the topic: as time passes, electrical engineering becomes more and more about choosing which software remedy to apply to what were once pure hardware solutions.

And when that is not the case, when hardware is going to be built, the decisions about circuitry and methods are increasingly determined in advance with software.

Here's an example: in my career as an electrical engineer, 30 years ago, I designed any number of phase-locked loops in circuitry. Now I design them with a keyboard and a computer monitor, using methods like this: http://arachnoid.com/phase_locked_loop/. The new PLLs work much, much better than the hardware-based ones, and require far less guesswork and effort.
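Not the method from that link, but a minimal sketch of what a PLL in software can look like: a phase detector, a PI loop filter, and a numerically controlled oscillator (NCO), all as plain arithmetic on samples. The loop gains kp and ki here are arbitrary illustrative values, not tuned for any real application.

```python
import math

def software_pll(samples, sample_rate, f_center, kp=0.2, ki=0.01):
    """Track an input tone with a software phase-locked loop:
    phase detector -> PI loop filter -> NCO.
    Returns the instantaneous frequency estimate (Hz) per sample."""
    nominal = 2.0 * math.pi * f_center / sample_rate  # NCO phase step/sample
    phase = 0.0
    integrator = 0.0
    estimates = []
    for x in samples:
        # Phase detector: mix the input with the NCO's quadrature output.
        # Time-averaged, this is ~0.5*sin(input phase - NCO phase).
        error = x * -math.sin(phase)
        # PI loop filter: the proportional path tracks phase jitter,
        # the integral path absorbs any steady frequency offset.
        integrator += ki * error
        step = nominal + kp * error + integrator
        phase = (phase + step) % (2.0 * math.pi)
        estimates.append(step * sample_rate / (2.0 * math.pi))
    return estimates
```

Feed it a tone a few hertz off the center frequency and the estimate pulls in to the input; in circuitry, the same three blocks would be a mixer, a loop filter, and a VCO.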

So my advice is not to abandon the software-based approach; it increases your employability compared to someone who only knows hardware.


I've served a whole bunch of roles in my five years at my current employer, and I'd probably classify myself as being an "electrical engineer" for all of them. (No single experience is typical, but mine is probably representative of a lot of chip companies.)

When I joined the company, the very first role I was put in was on the core bringup team for a complex ASIC -- that is to say, the team responsible for screening chips, and working on issues that affect all of the individual functional blocks. I joined about a month or two before silicon was to come back. So, I spent the first six or eight months or so at the company in the lab; the first month was spent familiarizing myself with the tools and boards that we'd be using, and then once silicon came back, I spent a bunch of long nights and weekends in the lab getting chips to various teams, and, in general, solving whatever system-wide problems showed up. Bringup was a lot of work but it was also a really good view into "how the sausage was made", so to speak.

After bringup, I moved to an IP team [1], where my title was "ASIC Engineer". At the phase in the project that we were in, most of the RTL [2] had already been written, and owners for each sub-block had already been assigned. So my job was to do a bunch of the "checklist" items for netlist quality. For instance, I spent a while reviewing test coverage, and waiving coverage for things that couldn't possibly ever be reached. Or I reviewed tool output that did "clock domain crossing verification" -- basically, the tool pattern-matched on various chunks of code to make sure that they were safe. And yes, I spent some time staring at waveforms, trying to debug our testbench, or any kinds of such things.
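The clock-domain-crossing idea can be sketched as a toy check (in Python; a hypothetical stand-in, as real CDC tools pattern-match many structures on the actual netlist): represent each flop with its clock and data source, and flag any flop that samples a signal from another clock domain unless it's marked as a synchronizer stage.

```python
def find_unsafe_crossings(flops):
    """Toy clock-domain-crossing check on a hypothetical netlist model.

    flops: name -> {"clk": clock name, "src": source flop name or None
    for a primary input, "sync": True if this flop is a deliberate
    synchronizer stage}. Returns the names of flops that sample a
    signal from a different clock domain without being marked safe."""
    unsafe = []
    for name, f in flops.items():
        src = f["src"]
        if src is None:
            continue  # driven by a primary input, not another domain's flop
        crosses = flops[src]["clk"] != f["clk"]
        if crosses and not f.get("sync", False):
            unsafe.append(name)
    return sorted(unsafe)
```

A real tool would also verify that the synchronizer is actually a proper multi-flop chain, check for reconvergence, and so on; this only shows the shape of the problem.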

I spent a while on another couple of bringups, which I volunteered for this time. I enjoyed them, and gave myself some time off after each to compensate for the nights and weekends.

At some point, someone decided that I was a better architect than engineer, which is probably for the better, because I was very slow at checklist items. So at some point I switched to an architect role, which meant that I was responsible for doing the definitions of sub-units, rather than implementing the hardware for them or implementing the testbenches for them. And, in general, the whole specification process is part of the architecture team's job. So, one day, when there was an output quality problem with our block -- it worked as specified, but given that it did some image processing, the image quality had some defect, so the specification was wrong -- I was tasked with spending a few weeks to reproduce it on hardware, find register settings that made it better or worse, and finally, understand what the defect was in the specification, and how to avoid it in the future.

Another task I had as an architect was to do the definition for a sub-unit from the ground up. This was a year or two of work. My primary output, interestingly, was not code, but instead a 100-or-so page Word document that specified how the block was to work, and what registers should program it; the consumers of that document would be the hardware team that implemented it, and the software team that would build the software. And, subsequently, I was tasked with implementing a model of that block in C, which could be checked against the RTL that the design team wrote. Near the end of that project, I wrote validation tests for it, and yes, I then spent some time staring at waveforms helping the design team to understand why their RTL implementation diverged from my C model. (They were, often, right. I am very lucky to work with an extremely skilled RTL team.)

These days, I'm doing more algorithmic research, trying to figure out what should be next for the block that I'm working on. In parallel, I sometimes get on phone calls with, for instance, image sensor vendors, understanding on an electrical level what's going on inside of their next sensors, and how they will be transmitting data back to our processor. So even though I work in the digital domain a lot of the time, having a firm grounding in 'is it possible to wire this to this' has gone a long way to help out, and being handy with a soldering iron has made my life a lot better on more than one occasion.

My experience spans some gamut, but not all of it. I don't work on place-and-route, and I don't work on board design (at work, at least). There are a lot of things that electrical engineers do :-)

Hope this helps. (I can answer questions, I suppose, if you like.)

[1] For some reason, the semiconductor industry calls functional blocks IPs -- yes, as in 'intellectual property'. This particular IP was not something that we licensed to anyone, or that we licensed from anyone; the only 'customer' of this IP was our own chip team.

[2] Again, another acronym whose expansion ("Register Transfer Level") is not super descriptive. Essentially, source code. Usually in Verilog, or an even higher level language. EEs seem to love Perl and Tcl, so most places I've worked have had Perl or Tcl preprocessors before their Verilog. Ugh.
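To illustrate the preprocessor point (hypothetical, and in Python rather than Perl or Tcl): the usual job of those scripts is to stamp out repetitive Verilog from a short description, such as the address-decode case items for a register bank.

```python
def emit_reg_readback(regs, base=0x0):
    """Generate Verilog case items for reading back a bank of 32-bit
    registers -- the kind of boilerplate a Perl/Tcl preprocessor
    typically produces before the RTL goes to the tools.

    regs: list of register signal names; addresses are assigned in
    4-byte strides starting at `base` (both are made-up conventions)."""
    lines = ["always @(posedge clk) begin", "  case (addr)"]
    for i, name in enumerate(regs):
        addr = base + 4 * i
        lines.append(f"    32'h{addr:08X}: rdata <= {name};")
    lines.append("    default: rdata <= 32'h0;")
    lines.append("  endcase")
    lines.append("end")
    return "\n".join(lines)
```

The generated text is ordinary Verilog; the win is that adding a register means editing one list instead of three hand-maintained blocks.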


No VHDL? ( :) ) Also, Tcl - WHYYYY..

ASIC work is fascinating.


Because TCL was one of the first great scripting languages, so we glommed onto it and it got used everywhere. You would be surprised (and then maybe appalled) at the number of times TCL appears in a tool chain like that.

I know. That was intended as more just "ugh, more Tcl" than "why more Tcl?"

There's a lot of overlap between the two, or there can be; there is as much variation in what one does in EE as you can imagine, from supervising operations to semiconductor design, etc.

A design engineer tends to have a batch-oriented life. Typical workflow might be like CAD -> sim -> layout -> debug -> small scale production -> testing -> handoff to production. Of course all of this is as a member of a larger team. Somewhere in there you'll either work on software or firmware or both. I spend about half my time in the lab or field, the rest in my office or meetings.

One bad side of EE is that you can break stuff in ways you just can't in software. When the magic smoke comes out, there's no 'svn revert' - you have to figure out what you broke and fix it before you can move on. This always happens when you're in a rush, and it causes plenty of unplanned late nights. And for additional fun, it's not uncommon for problems to crop up where you just have no way to get at the underlying issue. Datasheets don't have all the info you need, and you can't always figure it out. Sometimes you hit a wall and just have to start over. I used an Atmel processor which had a weird bug in its I2C slave module that prevented it from working properly. The best solution ended up being to move to a different processor, which was incredibly painful.

It's really awesome to be able to hold on to a thing that you built and make it go. Seeing your thing go out in the field and work is very rewarding.


How are your troubleshooting skills?

I'd say my EEs spend 50% in design and 50% in debugging customer problems when a board fails in manufacturing or fails in the field.

The answers can be easy (say, a resistor is out of tolerance or the wrong oscillator was placed on the board), or they can be really tough (a transient is killing a FET and locking things up).


I was an RF engineer two years ago, and now I'm doing software development. I resonate with many of the above comments. While I was doing RF design, I spent a lot of time in CAD tools (I agree many of them are overpriced). After I got my design back from the fabs or manufacturers, I spent most of my time in the lab validating and optimizing it, because some unforeseeable fab/manufacturing process variations aren't captured by the CAD tools. The thing I found very gratifying was that if you use the right framework and methods, grounded in physics, the performance of the design will most likely match what you predicted, and that lets you design something that reaches or surpasses industry benchmarks.

I have been working for about 20 years. At the beginning of my career I had a choice between the ASIC side and board design, and chose board design because I thought (rightfully so, I learned later) that it was a more "physical" task. So I worked for about 10 years in board design, and my typical work day was not typical at all. Some of the time researching parts. Some making schematics of circuits. Some making PCB layout. Some on debug and integration. Some time mastering scope and logic analyzer use. Some time talking my way through all the people I have to receive products from or deliver to: SW engineers, FPGA engineers, mechanical engineers...

I made analog designs, digital designs, power supply designs... Then I started feeling more and more that I designed less and spent more time learning what the chip designers did, as chips incorporated more and more functions that once I did by myself.

So at the end I switched to FPGA design and today my typical workday is coding, debugging, simulating... 95% of the time on my computer. A good friend of mine coined a good phrase: FPGA engineers are SW engineers that disguise themselves as HW engineers. A good joke. But every joke has a bit of truth in it.


It seems like board design is a much less monotonous and more practical task that you seemed to enjoy. What made you change to FPGA? Do you feel that the bulk of what used to be board design is now happening inside silicon?

I started my career as a software developer, but got bored by the web development that seems to make up 90% of the industry, so I started to study EE because I really enjoy prototyping fun projects. I just ordered a PCB for one of my projects.

But when I read your, and other comments, it seems that just as the things I enjoy about programming are not what a typical job is about, the things I enjoy about EE are not what typical EE jobs are about. I'm not sure where that leaves me...


Yep, as I said, I felt that I was doing less design and more system integration, or "lego building". Besides, I had been doing that for many years and I thought a change was due. From my description it may seem that FPGA work is monotonous... that was not my intention at all. For me, FPGA design is much more challenging and fun than board design ever was. The downside of FPGA design is that human contact is much more limited.

You cannot really receive answers for your questions, because most of these things are subjective. You have to make your own way and make your own experience. Experience, after all, is the comb you get when you have no more hair ^^


This year, I left my electrical engineering (EE) studies for software engineering. I had a lot of personal experience with coding before going into EE, and I wondered if some hardware would make my life more interesting.

After 1.25 years, I realized that I was spending all my life in a lab with no windows, breathing toxic soldering fumes, fighting extremely annoying software (Altium & other proprietary, overpriced pieces of technical debt). Also, I realized whatever I created needed a lot of work for an unsatisfying outcome (a sound amplifier is less satisfying to me than some WebGL stuff that moves, even though it requires a lot more work). The nice feeling of being powerful when writing software & the instant compiler/interpreter feedback are what I missed the most.

At least, as a software engineer, you have better odds of finding a nice workplace, with windows and software you can choose.

The part I miss from electrical engineering is the physics part (But we were only skimming this part anyway).

Finally, circuits are a thousand times less satisfying to me than code. I've seen people for whom it was the opposite. They did not get programming at all, but they were designing circuits at the speed of light with an intuition that I did not have.


I like Altium and solder fumes.

I think the people who coded Altium liked solder fumes... A bit too much.

If you think Altium is an "overpriced piece of technical debt", HOLY SHIT you should try anything it competes with. It's so much nicer than anything else out there in the EDA market. At my work they use Proteus, and it's horrible.

Also, solder fumes aren't toxic!


I've only tried Altium in any depth (and some open-source tools shallowly). It is surprising to me that competing products are even worse.

If you think Altium is bad, you've never used Orcad.

If you think Orcad is good, you've never used Eagle.


If you think eagle is good, you've never used KiCad.

Seriously, I love KiCAD. So much better than eagle, and open source.

I'd love a recommendation for a good KiCad intro tutorial. I tried to just hack through learning it once and I couldn't pick it up. Any suggestions?

I used a YouTube playlist[0] to get the basics down, goes from installing KiCad through to soldering a finished board.

[0]https://www.youtube.com/playlist?list=PLy2022BX6Eso532xqrUxD...


Thank you! I'll check this out!

Edit: ...is that Chris Gammell? How did I not know this was a thing he made?


He became something of a huge KiCad evangelist there a while back. I've heard that this list and Contextual Electronics are pretty good. I think he still works for parts.io too.

For home use, I use Diptrace. Bad name, maybe, but excellent software and the price is right.

What's the learning curve on newer tools for someone who has been using EAGLE for years? I'm scared of change.

Tell me if I got this right: Altium is better than Orcad, and Eagle is better than Orcad.

So which one is better between Altium and Eagle?


No comparison, Altium is much better than Eagle, especially for professional work. I haven't used Orcad in a long time, but back then, Orcad was much better than Altium. Nowadays, Orcad is priced beyond mortals, Altium is relatively affordable, and Eagle is cheap. Kicad is free and considered equivalent to Eagle.

> Altium is relatively affordable, and Eagle is cheap

For reference Altium is still around $7500 plus $1500 per year for updates and Eagle without board restrictions is $1600.


Whoops, I meant to imply that Orcad is better than Eagle.

I really just find it hard to do hobby projects in Eagle because I'm so used to Orcad's workflow by now. All the shortcuts are different. :(


Somewhat amazing that when I left hardware design more than 20 years ago we were using Orcad (Eagle existed too, iirc).

Depends on if you use leaded solder or not (and really, I appreciate why people do - unleaded solder at home as a hobbyist has been a nightmare for me so far).

It's the flux in solder fumes that's toxic; you shouldn't be vaporising the lead. A well-ventilated room and a small fan to blow across your work area should be enough for most hobby work.

That being said, you should wash your hands after dealing with leaded solder and consider some disposable gloves if you're going to be doing a long session.


There is no lead in leaded-solder fumes (look up the vapour pressure tables, seriously).

The only thing in leaded-solder fumes is the rosin, which is benign except for being an asthma sensitizer, so you should probably take precautions if you have asthma. Other than that it's basically harmless.


> Also, solder fumes aren't toxic!

If the solder has lead the fumes are toxic. If the solder has a rosin flux the fumes are toxic, and the operator needs ventilation to keep exposure below recommended levels.

It's a bad idea to be breathing in the fumes, but obviously this depends on how much you're breathing. At most risk are people who solder all day everyday.


> If the solder has lead the fumes are toxic.

I encourage you to support this statement. Lead doesn't vaporize at anywhere close to soldering temperatures, and from what I have read there is no lead component to soldering fumes and consequently no exposure through respiration.

Rosin and particularly No-Clean fluxes do pose a respiratory hazard, so fume extraction is a good idea regardless.


Lead in the fumes is still solid particulate matter, but that's still not something you want in you. It's not likely to be a major issue, but it's not completely safe either.

Holding a pencil in one hand that has a tip at 650 F isn't completely safe either.

EE & programming skills combined are superb if you are into robotics, embedded stuff, or physical products in general. I do front-end web programming for a living but spend my free time in my "EE lab", teaching¹ myself to get out of the "yet another React app" limbo.

¹) I don't expect to find a real EE job without a degree; more like becoming a generalist who also has a clue about electronics.


Yeah I have been doing software for a long time and a couple years ago I started getting into embedded software and loving it. I have found some embedded developer positions I could get into without an EE degree but would have to take a cut in pay. If that's something you can afford to do, that's probably your best bet. I can't afford to take the cut but some people are better with savings than me lol

I think that's an unfair comparison. The point of EE is, in the end, to ship products, often many units, that work for a long time. You can (to some extent) disregard good practices, barely read the datasheets, and cobble things together in EE too, but that isn't the default. Make a novel enough amplifier and it can be relevant for decades, while technologies like WebGL are often emerging right up until they're obsolete.

It seems to me that amplifier design is so slow-moving that the argument is not really in its favor. However, the abstract point makes sense, because only talking about low-hanging fruit is misleading. From some higher vantage point, I guess, the fields are too different to be comparable at all, up to the point where it all might come together through maths.

Computer engineering, at the intersection of EE and CS, is all of both or nothing really, depending on your point of view. A respected man once said, every idiot can count to one ;) But then again, digital and analog with enough precision seem to be the same thing from different perspectives. So I'm not sure whether it makes sense to look for differences between them.


My job is split between software/firmware development (mostly C in Linux on Cortex-A series or in a RTOS on Cortex-M micro controllers) and hardware development (mostly embedded systems, digital control, motor driving, etc., working with RF engineers also).

While I started out doing software and love it, for me electronics is just a whole next-level of awesomeness... Altium is actually in many ways a really good piece of software (yes, there are also some annoyances, but they are minor compared to any other electrical design software). It's true that it's stupid expensive but given the other options it is worth the price to us at least.

But I get to work on really cool stuff (mostly satellite tracking antenna systems), and our labs are on the side of our building with a full-height window down the entire length, so your mileage may vary.

I actually don't get the solder fumes either, because we have techs who are much better at reworking tiny surface mount parts than engineers tend to be!


Good answer. Although I'm actually kind of in the opposite boat.

I've been doing software dev for a little over 10 years now, and recently I have been leaning more towards electrical engineering. Partly because I loved electronics before I knew how to code, but the main reason is that I have yet to find a software company that can make thought-out decisions on languages and frameworks, listen to its devs, etc. Most companies I've worked at usually just pick the most common stuff and go with it, because they either don't care or aren't actually the right fit for their position, only to realize way too late that the software wasn't the best choice. And usually us devs get the blame, because we are the magic people who can "make anything work".

The other part to this is that we always voice our concerns early on, but they fall on deaf ears because it is too much of a hassle for our decision makers to make more decisions, weigh pros and cons, etc. It gets really frustrating knowing a project is going to have issues down the road but nobody listens until it happens. If you were to look back on our slack chats at the beginning of most projects you would see many devs predicting the issues with almost perfect accuracy.

Whereas (imo, without real-world experience, mind you) EE is fairly straightforward, because there are only schematics and circuits and tangible progress. No hours-long meetings on which of the hundreds of frameworks will guarantee success, PMs that don't really know much about proper project management, owners that think of themselves as the next Google even though us dev grunts know that isn't true, etc.

I'm probably letting my frustrations guide this reply a little too much, but what I think I'm trying to say is that the grass always seems greener on the other side, even when it's not.


I honestly cannot understand why anyone would call Altium an "overpriced piece of technical debt." It's an extremely powerful piece of software, and fantastically easy to use (especially compared to some of the crap out there, cough OrCADcough). Perfect? Of course not, but it's not bad.

I'm a SW guy .. been mulling getting a job in hw at some point. I watch a lot of YouTube and do hw projects in my spare time. Is there something like sw boot camps for developing hw skills? I'm almost considering getting an ee bachelors or masters as a part time student.

I got this in my inbox a few days ago. An Oakland, CA coworking space has an "Engineering Accelerator"[1] that they're running on nights and weekends to help people do HW engineering. Not sure if it's 100% what you're looking for.

[1] http://www.engineeraccelerator.com/


Depends on what HW skills you're trying to acquire. For things like fabrication/device-level design, antenna/RF design and measurement, etc., you have to go to university, because many of these topics require very expensive equipment to gain hands-on experience. That's not necessarily true for something like digital circuit design, though.

Not really, because hardware is not amenable to boot-camp style things.

Basically, hardware is way /way/ harder than software. Imagine software where each compile took multiple weeks and cost real money (hundreds of dollars for the simplest of projects, to tens or even hundreds of thousands for really complex things). Changing the layout or design almost always involves a complete board respin.

There is no easy getting started (maybe arduino stuff, but there is SO MUCH horrible misinformation in the arduino communities), because the minimum functional system is a complete system. You basically have to get everything right the first time in hardware, or it won't work. The learning curve is more of a learning wall.


You're right that you couldn't do a 'boot camp' style short course and land an EE job, but it's not as hard as you make out if you set your sights lower.

To give one example, let's say you want to start out by making guitar pedals. There's a healthy market for boutique guitar pedals, plenty of existing designs available online that you can modify when starting out, and when you're ready to start making designs from scratch you're looking at stuff you can understand with entry level electronics knowledge. Furthermore, there's room to grow as your knowledge improves (can get into DSPs, for example).


Can you specify what SO MUCH HORROR amounts to in the arduino community?

From reviewing these posts, it seems like EEs spend most of their day frustrated, trying to get the device to simply boot or do the simple thing. If there's a tool for beginners that snaps together and avoids those headaches, that seems valuable in itself, almost anti-HORROR.


My issue with the arduino community basically comes down to:

1. The HORRIBLE code quality in the entire arduino codebase.

2. The prevalence of straight up factual errors everywhere in their stuff.

3. Their shitty, SHITTY schematics.

To be clear, getting an AVR to run a "flash the LED" program is trivial. There is really not much /to/ an arduino (it's an overglorified Atmel eval board).

When you refer to `trying to get the device to simply boot or do the simple thing`, what you're normally talking about is trying to get a fully integrated, complicated system to power up and function to the point that it can then be debugged further. This is a COMPLETELY different ball of wax, and requires a much broader toolset, because you have to be able to diagnose why it's not working. Sure, with arduino things, you can plug the bits together, and it'll MAYBE work (assuming non-buggy libraries, which is very much not usually true). OTOH, if it doesn't work, ¯\_(ツ)_/¯. You're going to need a logic analyser, or an oscilloscope (or both!) to figure out why. I can pretty much guarantee the person who initially wrote the library for each of your "snap together" module components did have to go through all that.

Basically, arduino stuff is the PHP-equivalent of the hardware world. The general quality is low, there is little push for improvement, and everyone is sticking buggy crap on the internet.

For example, the "Arduino Mega" (call it a goddamn ATmega2560, dammit) has multiple hardware serial ports (4 of them). For a long time (possibly until today), even if you weren't using the ports, it allocated two 64 or 32 byte buffers for each port statically. This is on a platform with 8KB of RAM total.

Would it be trivial to fix this? Yep. Did they? Not the last time I checked.

These are also the same people who decided malloc() on a platform with 2 KB of ram was a good idea (it's not).


Thanks for the reply, I'm just seeing this. I'd agree with you that anyone who knows what they're doing and is looking to deliver a professional product will find the Arduino, its price, and its libraries unsuitable.

Think of it like physics problem sets though. You don't get to Physics 503 by just reading the first three text books, you have to actually do the problem sets and struggle with them. In our first problem here, assume there is no friction, even though any experimental physicists will tell you that's crazy, that never happens. Once you've mastered the exercises, then you criticize their clunkiness and move into a framework with more realistic assumptions.

And for me, the refusal to fix sub-optimal design is pretty appreciated as it doesn't subtly break some aspect of already written books, tutorials, shields, etc. Beginners will often look to copy very literally an already known solution, which may be on someone's blog from 2011. Little changes, which happen often with RasPi, result in hundreds of comments like 'This doesn't work, please help!'


I'm just a beginner. I write in hardware description languages for test benches, simulations, checkers, and constrained random tests. I've also done PCB layout (hobbyist to RF microstrip), GUI programming (I don't enjoy that), and used IC design software for simulation and layout. I also spend a lot of time working with hardware. The debugging process is very challenging, and certain aspects of it can be tedious. Overall I find it to be satisfying, challenging work. I also get to apply my Linux knowledge most of the time when working on things, which is great.

Just a note that if you go into automotive, you will spend 90% of your time in Matlab/Simulink. Pretty fun IMO.

Can you elaborate on that? What do you use Matlab/Simulink for in the automotive industry?

Enable wireless hacking gateways for cars ?

I think it depends on the specific job in the industry, but I believe the author was referring to internal combustion engine control systems / ECU development.

There are a bunch of EEs in here complaining about the software they have to use. I took a few EE courses in school and the only thing I remember is having to use horrible software.

Why hasn't someone made good software for EEs to use? And if they have, why do no EEs use it?


In my opinion,

1) Barrier to entry is way too high compared to other software niches. Good luck designing a cycle-accurate CPU simulator, or creating a reliable and robust analog circuit simulator that competes with offerings from EDA giants like Cadence and Synopsys.

2) The adoption process. I learn tool X in school, I start a company that uses tool X, I teach my hires tool X, etc. Once tool X becomes a central part of the design process, it's almost impossible to get rid of it.


There is lots of good software for EE, it just costs $$.

I worked on maintaining this software quite a few years back. It was quite innovative when first released. http://solutionselectricalsoftware.com/

Most of my professional experience has been with Orcad, and the folks who write that tend to write it for people who pay good money for it. Generally, those people tend to be in the semiconductor industry. As a result, there are a ton of features in the Cadence suite that are highly optimized for that industry. That doesn't make them easy for laymen to use. It just makes it easier for the people who are keeping Cadence in business.

That's my read on it, anyway. Anyone who figures out a way to make EE CAD software better and easier to use stands to do well for themselves.


Not a big enough market to make money making a better product. The tooling is super complicated and there are a limited number of potential users.

There are many reasons why so much legacy software just gets updates and refreshes. Someone mentioned the small size of the market, which I think is the main reason. Originally, these programs were designed and written by EEs themselves, who weren't really SW experts. Today's software has moved toward user friendliness and onto the web, while the EE tools are mostly based on older Windows platforms or Sun workstations running primitive X Window environments that looked like Windows 3.1.

The best stuff is usually pretty expensive. Although Altium is making lower cost and free versions available with their CircuitMaker series.


That's ironic. Every time I've worked with an EE (as a software engineer), they've been among the best developers on the team.

Been doing RF/Microwave/Antenna design for 20 years. Fortunately I have stayed hands-on with hardware development. Start with simulation, then PCB layout, followed by lots of measurements in the lab. Sometimes meetings for customer requirements, writing technical parts of proposals, getting capital equipment purchased, etc.

Hey what's with that 'path loss' equation? Looks sketchy to me.

Looks sketchy in what way?

I'd say it's sketchy because people say path loss increases with frequency; well, it doesn't if you have the same physical antenna apertures (think parabolic dish). The formula is called a loss because it's used that way from a circuit perspective, where you treat the antenna gains as unrelated to aperture; in that case, loss does increase with frequency.

Yes, that frequency term is precisely the sketchy bit. Because of that, the equation doesn't conserve energy, which is normally considered a bad sign.

On the other hand, to do it properly you have to throw out the 'reciprocity' of sending and receiving antennas. They are not really the same at all.
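To spell out what we're both gesturing at (my summary of the standard textbook Friis relation, not anything from this thread): the usual free-space form writes the received/transmitted power ratio in terms of antenna gains, but substituting the aperture form of gain, $G = 4\pi A_e/\lambda^2$, flips the frequency dependence:

\[
\frac{P_r}{P_t} = G_t G_r \left(\frac{\lambda}{4\pi d}\right)^2
\quad\Longrightarrow\quad
\frac{P_r}{P_t} = \frac{A_t A_r}{\lambda^2 d^2}
\]

So with fixed physical apertures $A_t$, $A_r$, the link actually improves as frequency goes up; "path loss increases with frequency" only holds when you hold the gains fixed, i.e. treat the antennas as fixed-gain black boxes.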


There's a lot in EE that's nothing to do with hardware per se. Like digital communications and signal processing.

EEs also study computer architecture and OS internals in detail, depending on one's emphasis.


I do FPGAs and hardware design. The FPGA part of my work is more like software I guess; bug fixes, feature enhancements, maintenance, etc. Hardware design comes in waves depending on how complex the design is and how the requirements change. When it's "design season" I spend most of my time in schematic and PCB layout, then wait for hardware to come back, then do bringup. I also get to dabble a bit in RF/microwave and DSP.

But as others have said, EE jobs run the gamut. Depending on where you go to school and what emphasis you choose, you could be doing digital IC design (which contains multitudes within itself, e.g. high-level architecture, clock trees, verification, integration), analog IC design, RFIC design, board-level RF, chip package design, embedded systems, power electronics, antennas, FPGA, HDL/IP cores, test engineering, digital signal processing, control theory, communications systems -- the list goes on and on.


I've lived two careers -- one in semiconductors doing ASIC physical design (highly aided by writing software I might add), and another in startups doing backend database, systems, the gamut of server-side programming.

The hardware job seemed more like "real" engineering -- it was more rigorous, there was less room for error or experimenting. But, I got bored with it eventually -- it was the same thing, just a bigger chip, more people working on it, and a longer design cycle from one thing to the next. Also, it seemed like upward mobility was hard -- I didn't want to be a middle manager anyway, and it felt hard to have a large influence in a company with thousands of engineers. In the end, I felt I didn't want to die at that desk, so I switched into working for internet startups.

The software end of things has felt more creative and has definitely been more fun. It's more laid-back, people are generally a little more interesting and well-rounded, and you typically work at a place 2-4 years and then move onto something new -- which I like. You're generally working in smaller teams and you get to work on a variety of different pieces of the system if that's what you like to do.

All in all, they were both rewarding experiences with good compensation -- the software thing might end up being more lucrative in the end and I just sorta have more fun doing it. At the time I made the switch, I took a pretty large pay cut going from a mid-level hardware engineer to a junior software engineer, but it's one of the best decisions I ever made looking back.


This video is an amazing time lapse of the creation of a Eurorack synth module. It shows you what you'll spend a lot of time doing in a smaller org (smaller, as in, you do everything...). https://www.youtube.com/watch?v=1pawXfoTg1k

Wow that was mesmerizing. I'm surprised that all of the traces were drawn manually. I had always assumed that modern design software would do all of that tedious stuff for you.

The creator of that video was answering questions on reddit, and amazingly the whole design and build process took less than a week.

https://www.reddit.com/r/ArtisanVideos/comments/50tqgx/the_d...


So EE today is almost entirely about digital circuitry?

I'd guess others are less likely to read HN.

I know people that design electric motors, microphones for hearing aids, overhead electrification for railways, railway signalling systems and power distribution in very large buildings.

However, I don't know what a typical day would be for any of them.


TL;DR: I'm an impoverished small-scale serial arsonist of expensive components, remembering that software means never having to leave physical proof of your boo-boos (most of the time). OK, I'm actually a software engineer back into hardware engineering after 20-some years because I need to build custom cameras for machine vision projects. Major recovered skills are CAD (Eagle), more geometry (layout), exercising credit cards because I'm still bad at SPICE, lazily using standard cap and resistor values even when they're wrong for >this< circuit, and excessive whining about stray capacitance. Since I need to work at 3.3V and I'm in my 50s, I can't see the parts so well and my motor skills aren't what they once were, so that's even more expensive crap I need (rework station, reflow oven).

Having just deployed a machine vision system (using GigE machine vision cameras): why exactly are you building custom cameras? If you don't mind/are allowed to share this information, of course.

I do hardware engineering because I started my career when it and heavy math were required to get into software courses (and your first computer came as a bag of parts). The camera itself is a spherical build, wedging 26 cameras, 4 BeagleBone Blacks, a net switch, IMUs, GPS, and batteries for the soul-sucking multi-amp drain into a 5" ball (OK, I gave up on the batteries for the moment :-). My primary "actual engineering" has been the control distribution circuits across 8 paired I2C/SPI busses at 4-8MHz and trying not to corrupt the bits in the process (a little bit of whining at Wolfram Alpha with the theoreticals, a lot of being very neat and tidy with the layout); the power distro itself with its various internal taps, because power is analog and analog is proof that God hates us; the various passives and actives involved in everybody actually powering up in some more or less controlled fashion, etc. Technically, for the camera I use the Arducam FPGA against OV2640s and OV5640s, because that gives me a nice (slower) pure I2C/SPI interface. The real goal is to feed the TensorFlow stuff in the cloud; the camera geometry, IMU, and GPS provide enough correlations for the system to lock onto pretty easily, i.e. stationary spherical image reconstruction and spatial analysis is a no-brainer (relatively speaking :-), and the back-end system is now getting more betterer at using the IMU data stream as a primary assist when the camera is in motion. Oh, and then there was the pesky little issue of delivering the GPS clock to 4 Linux-based computers to establish a common timebase, because we all know that Linux is an awesome RTOS.

I am a board level design engineer and the work is similar in many ways to software. It is very numerical and requires attention to detail.

I spend 25% of the time talking about requirements to other engineers, 50% comparing components to use with datasheets, 12% in Altium Designer and 12% buying components and PCBs.

Huge pain #1: the unbelievably slow process of manufacturing PCBs. Imagine that you were back in the old days of computing, where you had to use punch cards with machine code and give your stack of punch cards (your "program") to the punch card operator. He would run it overnight and you would get your results the next day. If your program failed, you had to meticulously comb through it and debug in your head. This is modern-day PCBs. Holes in a board that take forever to make. Then you have to pay $80 shipping to get them next day, or you can pay your engineers to sit around doing nothing. And the PCB might not even work. The learning feedback loop is very slow.

If you want to do anything remotely interesting like via-in-pad or four-layer boards, first you have to wait at least 24 hours for a custom quote. Half of PCB vendors don’t just give you the formula to make your own quote. Then you have to pay either $1000 and 2 days or $200 and 2 weeks to have 5 copies of your design.

Huge pain #2: reinventing the wheel. When I open a datasheet, I have to read about the device. The pins, the maximum ratings, the application note “gotchas” like “don’t leave this pin floating or else the chip will be unpredictable!” Then I have to make the schematic symbol and footprint by hand. That means manually entering IPC package dimensions into Altium like a braindead zombie. Every package is slightly different. I cannot tell you how many Texas Instruments DC-DC converters I have hooked up. I have no idea why device manufacturers don't just hire someone full-time to make open-source 2D and 3D footprints for the top six CAD tools. SnapEDA and Altium Vault are trying to do this, but the footprints are flawed and they are outright missing a lot of parts. I cannot tolerate mistakes when each board costs hundreds of dollars. The device manufacturers already make footprints to test their parts. Why don’t they share them??

Huge pain #3: High barrier to entry. Very expensive software. In software engineering, professional tools like git, Visual Studio, Eclipse are all free. You can pull code at home from Git and start contributing immediately. The only barrier to entry is the time you need to understand the existing codebase. Even in firmware you can download Code Composer Studio or PSoC Creator for free.

In board design, you need to pay $300 for EAGLE or $5000 + $500/year for Altium if you're serious. Sometimes OrCAD goes on sale for $300. Let's say you want to simulate Bitcoin mining ICs frying themselves in their own heat. Or maybe you want to know the radiation pattern of your antenna. You can pay another four-digit price tag for simulation software like CST, or just copy a previous design like a zombie. Upverter is trying to solve the upfront cost problem with their $125/month SaaS subscription pricing, but I tried their editor 3 months ago and it was 15 FPS with the example board. Not cool. KiCad is an open source alternative to Altium, but as far as I know it is nowhere near comparable.

Huge pain #4: ordering components and PCBs. In my last project I had to order components from China. Ordering from China is not very easy with the language barrier, bad spec sheets, and the time difference. Alibaba is the place to go for ordering from China, but all the prices are "Contact us", which means you have to give all your information blah blah blah until you get an email with the price and then pay with wire transfer. Sometimes you get lucky and you can find what you want on AliExpress and pay with credit card.

But the tradeoff to all of this is that if I do it correctly, I can hold something in my hand and give the software engineers a new API to play with. The APIs all stem from the hardware. The work is often more fundamental with equations and physics rather than purposeless corner cases I had to consider when I was in programming. And hardware often has the chance to be featured on the box of a product rather than software which is all just assumed to work. It feels more meaningful.


How small is your company that you have to order your own components and boards? We only have 45 people in our company but even at our small size we have dedicated purchasing people to do that for us.

My company has thousands of employees but I am on an experimental division that only has 13 people in the office. Three of them are entry level engineers. Everyone else is in another time zone. The company is not about electronics so I am not aware of any purchasing people I can use. Even if I could, I am not at the stage when I want to produce tens of thousands of boards yet.

Yann LeCun is in fact an electrical engineer!

Is it math-heavy in practice?

Software engineers usually have to take quite a few courses in calculus, linear algebra, and stats. But that stuff very rarely comes up in practice, at least in most subfields.


If you're doing signals stuff, it can get mathy (differential equations or Fourier analysis type stuff).
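For a concrete flavor of the "Fourier analysis type stuff" (just an illustrative example, not tied to any particular job): the continuous-time transform pair that signals work leans on constantly is

\[
X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt,
\qquad
x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j 2\pi f t}\, df
\]

(EEs write $j$ for the imaginary unit, since $i$ is current.) In practice you mostly work with its discrete cousin, the DFT/FFT.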
