Sounds like 80% is a theoretical prediction for this method, not an experimental result:
Naik said adding the emitters to standard solar cells could boost their efficiency from the current peak of about 22%. “By squeezing all the wasted thermal energy into a small spectral region, we can turn it into electricity very efficiently,” he said. “The theoretical prediction is that we can get 80% efficiency.”
I am especially infuriated by how it repeats the common misconception of infrared light as the epitome of "heat", as if it were some kind of separate particle.
I'm not infuriated, but having teachers and other adults conflate IR with heat certainly added to my confusion about the physical world when I was a child.
Sorry for a potentially stupid question: why do they need to convert heat into visible light? Why can't we convert heat directly into electricity at the same time as we convert visible light into electricity?
"Light" here is a misnomer. What they're doing is taking the wide array of infrared wavelengths and channeling them into a single wavelength. It's easier to build a solar cell that's tuned to that wavelength.
I don't know why they're calling that "light". It's not clear to me if it means visible light. The paper seems to imply that it peaks in the near-infrared. They appear to be using "light" in the general sense of "photons"... but it's all photons.
They're talking about very high temperatures, the kind that emit significant visible light as well.
You're confused – heat isn't being converted into anything. The article conflates the words "heat" and "infrared radiation", which are not at all the same thing.
As a rule, everything around you glows – in other words, it emits photons. Some infrared ones, some visible ones, some ultraviolet (or even higher-frequency) ones. How much of each depends on the temperature of the material. Most things (e.g. you) are too cold to be visibly glowing, but they still emit plenty of infrared radiation.
Now, standard solar cells convert visible photons into electricity. Any incoming infrared photons, however, aren't strong enough to kick any electrons loose, so they go to waste. Here, the researchers used carbon nanotubes to absorb the wasted infrared photons. This heats up the nanotubes, which then emit visible photons, which then increase electricity generation.
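To make "aren't strong enough" concrete, here's a minimal back-of-the-envelope sketch (my own illustration, not from the paper) comparing per-photon energy with the roughly 1.1 eV bandgap of crystalline silicon; the wavelengths are just representative examples:

```python
# Photon energy E = h*c/lambda, compared against an assumed ~1.1 eV bandgap
# for crystalline silicon. Photons below the bandgap can't free an electron.
H = 6.626e-34        # Planck constant, J*s
C = 3.0e8            # speed of light, m/s
EV = 1.602e-19       # joules per electron-volt
SI_BANDGAP_EV = 1.1  # approximate crystalline-silicon bandgap

def photon_energy_ev(wavelength_nm):
    """Energy of one photon at the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, wavelength_nm in [("green (visible)", 550),
                            ("near-infrared", 1500),
                            ("mid-infrared", 5000)]:
    e = photon_energy_ev(wavelength_nm)
    verdict = "above" if e > SI_BANDGAP_EV else "below"
    print(f"{name:15s} {wavelength_nm:5d} nm -> {e:.2f} eV ({verdict} the ~1.1 eV bandgap)")
```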
There is no way to convert heat to electricity directly, unfortunately. However, you can get small amounts of electricity out of temperature differences: https://en.wikipedia.org/wiki/Thermoelectric_effect
> This heats up the nanotubes, which then emit visible photons, which then increase electricity generation
Once a solar cell is built with this technology, do you think there will be enough light emitted by the nanotubes to make the solar panels appear to glow, themselves? Because that would be pretty interesting.
If the solar panels appear glowing, it's because visible light is being dispersed away from the solar panel itself. The consequence of that would be wasted photons that could otherwise be converted into electricity.
So, from a pragmatic perspective, I would say that no, they wouldn't glow - proper engineering would ensure these particles are collected.
The idea is to direct all of the nanotube-emitted light onto the PV cells, so hopefully you won't see anything. Citing the Rice article:
> Because electrons in nanotubes can only travel in one direction, the aligned films are metallic in that direction while insulating in the perpendicular direction, an effect Naik called hyperbolic dispersion. Thermal photons can strike the film from any direction, but can only leave via one.
>This heats up the nanotubes, which then emit visible photons
I'm a bit skeptical that that would work. Otherwise you could build a perpetual motion machine by having the nanotubes in a hot room and running your perpetual motor off a solar cell powered by the light.
My guess is that when the article says emit light, they mean emit infrared light / radiation rather than visible. The article doesn't really make sense to me - I imagine someone got confused by the physics rather than it being an actual breakthrough in energy generation. I note the article says "propose a way to build" rather than having actually built something that works.
The article doesn't provide very much detail, but it's absolutely possible to absorb multiple low-energy photons and emit fewer higher-energy ones, see e.g. https://en.wikipedia.org/wiki/Second-harmonic_generation – and apparently carbon nanotubes can do this as well.
> This heats up the nanotubes, which then emit visible photons
Critically these photons are not heating up the material. It’s like saying photons reflecting off a mirror heat it. No, the photons that are not reflected do heat the mirror, but not the reflected ones. (At least mostly.)
"There is no way to convert heat to electricity directly, unfortunately."
Well, if you have enough heat, you can boil water and then use the steam to turn a turbine... I suppose in some sense this is dependent on a temperature difference, but I wouldn't call the amount of electricity we get by this method 'small.'
Hot things glow. Medium hot things (like you) emit lots of infrared light. Really hot things (like lightbulbs) emit infrared and visible light. Ridiculously hot things (like the sun) emit infrared, visible light, and higher-energy radiation (UV etc.).
All physical objects emit electromagnetic radiation corresponding to their temperature (as opposed to reflected, reaction-induced, or stimulated radiation, more below), with a corresponding blackbody radiation curve. Note that blackbody emissions don't have a specific frequency (say, unlike laser light), though the curve has a peak emission.
Traditional incandescent light bulbs are blackbody emitters, as are hot coals, the Sun and other stars, and the glowing elements of an electric radiant heater. Human eyes see blackbody radiation, in the visible range, as having a characteristic colour, which paradoxically gets bluer as the temperature gets higher: the ruddy tones of low-voltage incandescent lamps correspond to low colour temperatures, and the intense white of halogen lamps to high ones. Blackbody radiation and colour temperatures are given in Kelvin, with typical visible light corresponding roughly to ~3,000 - 8,000 K. You might also recognise these values from the white-balance / colour-temperature settings on monitors and video equipment.
(Colour temperatures were used to judge processing for ceramics and metal processing / firing / smelting processes as well. They're also used to classify stellar temperatures, and in conjunction with brightness and/or distance estimates can be used to find stellar sizes.)
Infrared light is characteristic of peak blackbody emissions of bodies at or near "room temperature". So IR is not heat, but is associated with hot objects. Heat itself is ... thermal energy, which is its own complex thing.
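As a quick sanity check on the "peaks in the infrared near room temperature" point, here's a small Wien's-law sketch (my own illustration, with round-number temperatures):

```python
# Wien's displacement law: lambda_peak = b / T gives the wavelength at which
# a blackbody's emission peaks. Temperatures below are rough illustrative values.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k):
    """Peak blackbody emission wavelength, in micrometres."""
    return WIEN_B / temperature_k * 1e6

for label, t in [("room-temperature object (~300 K)", 300),
                 ("human body (~310 K)", 310),
                 ("incandescent filament (~2800 K)", 2800),
                 ("the Sun (~5800 K)", 5800)]:
    print(f"{label:34s} peak ~ {peak_wavelength_um(t):6.2f} um")
```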
Since all EMR carries energy, all EMR heats objects in which it is absorbed. A sufficiently intense emission above the IR range will also heat an object. However, hot objects in near proximity transfer a great deal of thermal energy directly as IR emissions. The illuminating capability of "white" light (typical solar emissions) allows a small amount of EMR energy to provide a high degree of visual information without imparting much heat to the illuminated objects (though it does impart some). Consider that even an (inefficient) 100W incandescent bulb, whilst hot to the touch, does not significantly heat the objects it is illuminating, and a far more efficient equivalent LED lamp, drawing about 15W of electrical power, accomplishes the same illuminating power with vastly less heat. We don't have to toast things to be able to see them, just bounce a small amount of (high-energy) visible light off of them.
Not all light (or more accurately, EMR) is blackbody radiation. There can be chemically-emitted light, directly from chemical reactions. The blue glow of methane (natural gas) combustion is a chemical emission, contrasted with the yellow-white glow of a candle, which is actually blackbody emission from suspended soot particles in the smoke plume. (There's a blue region of chemical emission at the base of the flame.)
Chemical emissions occur at specific frequencies of EMR emitted as electrons transition between energy states, which, if I understand correctly, is a quantum phenomenon. Fluorescent and LED lamps also work on the basis of chemical / valence emissions, and operate in far narrower bands than blackbody emitters -- one of the reasons these lamps can appear harsher than incandescents. Similarly for various chemical lamps, especially high- or low-pressure sodium vapour, formerly popular as street lighting, which emit in narrow bands (low-pressure especially).
Some forms of ionizing radiation are also EMR emissions, at far higher energy levels, triggered by nuclear transitions rather than electron transitions. Typically gamma rays.
And stimulated emissions such as lasers and masers are ... another phenomenon I understand only poorly, but are also tuned to very tight frequencies. Radio and microwave emitters are somewhat similar.
While this is a very interesting and ingenious development, what's needed for greater deployment are lower costs per unit of energy. Increasing efficiency will reduce the cost of land, but land isn't typically a huge factor in solar deployments. In this NREL analysis, land acquisition gets put into "Other Soft Costs" along with permitting, inspection, interconnection, sales tax, engineering, procurement, construction, developer overhead, and net profit:
I certainly didn't mean to discourage this at all!
But I think it's important to point out that even though people often disparage solar photovoltaic for having low "efficiency," that metric is not an impediment to its broad deployment or great utility to us.
Sure it is. Lower efficiency means you need more panels, larger panels, and more support infrastructure. It limits where it can be deployed and makes it less competitive.
And the need for more/larger panels and support structures will only be a win if overall, the costs decrease.
The cost is what makes the decision for deployment, not the efficiency. Perhaps this efficiency makes 2-axis tracking economical enough to justify, and then it gets deployed, but in the end the efficiency wasn't as important as the improved costs.
I'm no expert, but if the panels now generate additional energy/money that offsets the cost of adding these nano-tubes, doesn't that mean you can meet your power needs with less hardware, or generate extra money to sell back to the grid? It seems like what matters is whether these nanotubes generate more value than they cost.
To use a computer science analogy: performance (efficiency) is not the end goal, but it acts like a currency that you can use to 'buy' other things.
Yes, and since the panels themselves are only a small fraction of the total cost of a home solar installation, reducing the overall size of the project could save money even if the panels are a lot more expensive. https://earthtechling.com/solar-panel-cost/ estimates they are only 15% of the total cost of a home solar power system.
Not really. The difference in the land cost is effectively negligible. At this point (and it has been this way for many years), the limit in solar deployments is cost of panels and storage limitations.
There are vast chunks of empty land in AZ, NV, NM, TX, CA that could easily fit enough solar panels (each) to power all of the US. The issue is cost of panels and no base load supply.
Isn't this pretty different for utility-style solar installations versus the rest? For the former, land cost is clearly a variable. But especially for home PV, it seems like land cost is fixed.
There are multiple models (markets) even in today's world, and they have different needs and constraints. When you have lots of space but not a lot of money, you probably optimize for least money per unit of energy. When you have money but limited space (skyscraper rooftop, spacecraft), you might want the most energy per unit of area. And there are times when you need a secondary non-grid solution for redundant backup, so reliability matters more than other factors.
Different technologies with different pros/cons could be best for different markets.
If you could multiply the efficiency of a given solar cell by 4, all else being equal, you would also reduce the manufacturing cost per watt by 75%.
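To spell out that arithmetic with made-up numbers (a $200 panel producing 300 W today is purely an assumption for illustration, not a figure from the article):

```python
# Toy arithmetic with assumed numbers: at a fixed cost per panel, quadrupling
# efficiency quarters the number of panels needed for the same output,
# i.e. the cost per watt falls by 75%.
panel_cost = 200.0      # hypothetical cost per panel, dollars
panel_output_w = 300.0  # hypothetical output per panel at today's efficiency, watts

for efficiency_multiplier in (1, 2, 4):
    cost_per_watt = panel_cost / (panel_output_w * efficiency_multiplier)
    print(f"{efficiency_multiplier}x efficiency -> ${cost_per_watt:.3f}/W")
```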
From other comments, I am guessing this result doesn't give anything like that, but one should keep in mind that greater efficiency certainly could give great payoffs.
I'd expect most of the costs (land, installation, maintenance, interconnect) to scale linearly with the number of panels, so this should translate pretty directly into cost savings.
The primary issue with everything in this category is that plenty of things that are possible in the lab or in theory just aren't possible when applied in reality and at scale.
Blame journalists for hyping this because of the climate change focus.
I am involved in actively looking for better materials and better sources of energy. The reality is that there have been no fundamental breakthroughs since oil and nuclear.
No matter what you hear we just haven't had anything that fundamentally changes the game.
So instead of actual breakthroughs in the fundamentals, we get marketing, branding, and communication. But this is not something that can be fixed with those; it's physics, not product innovation.
We will most likely get fusion (far out) or fuel cells (even further out) before we get any fundamental breakthroughs here.
Why do you consider cheap solar not a breakthrough? If I 30-year mortgaged some solar cells, I could put them on my house right now and save $20 a month on net because of reduced electricity costs
Well, in the grand scheme of things, photovoltaic cells were developed back when nuclear reactors began to appear as well. They are not a new technology, just something that is getting optimized diligently now. In that regard, the parent post is technically correct. But that view is focused too much on big shifts in the market, if you ask me. Smaller, more incremental steps have added up to amazing things around wind and solar power.
Incremental improvement and economic forces do matter. Solar is now cheaper than coal in many climates, and we are seeing the effects.
Of course, there are still many problems to be solved. But we're also working incrementally on solutions to those. For example, battery cost is falling precipitously. "Grid-scale battery" was not a thing a decade ago; now you see installations popping up regularly.
I'd like to see cold fusion as much as the next guy, but dismissing incremental progress is foolish.
Unfortunately it's also a huge petroleum and natural gas producer. About half of the energy used in the state is for industrial use, much of it for the oil and gas industries. On the bright side, natural gas is displacing coal in the meantime while solar and wind are not replacing it fast enough. It needs to get better, but things could be much worse.
That depends on where you are. I asked around a bit, and solar is a net negative for me (cost-wise). The only option one company gave me required converting the roof of my garage to a flat surface (from a standard peak), which is utterly impractical around here.
And you can use them to power your watch perfectly fine too.
That doesn't mean it will work at scale for society, which is the primary issue.
The key thing to look for is energy density as that will be more likely to give society a bang for the buck.
Distributing solar cells widely, given solar's capacity factor, isn't economically feasible for society, nor is it possible as a main source of energy.
Nuclear, hydro, and to some extent thermal: none of them are in vogue.
These are arbitrary metrics; why does energy density matter anywhere that has the space? Even in northern latitudes and high-density countries you can generate a lot of electricity with a nominal amount of land, all the more so if panels can be fitted on rooftops. For sunny places near deserts, solar can absolutely provide a big percentage, even a dominant percentage, of power. The main issue is matching demand to supply; production density is a distant concern.
Energy density isn't arbitrary. It normally means something can be used in compact form (batteries, oil, gasoline, coal, uranium) and deliver energy at will.
Look at the capacity factor of wind and solar, add to that the huge areas they need and the fact that they are intermittent and you start to get a glimpse of the problem.
This is neither economically nor technically feasible. You can't generate as much as you think, and you can't do it at a cost that makes it generally feasible for society, not even with lowered costs.
Currently we are talking about 1% of world energy consumption, and it's not expected to be much more than 3-4% in 2040, despite huge investments and all the political goodwill you can ask for. [1] Keep in mind that the numbers you normally see displayed are for electricity, not for energy. Electricity is only a subset of energy.
Furthermore, investment in solar and wind is decreasing, especially when you take China out of the equation. [2]
And again: lab results or theoretically possible advantages most of the time aren't feasible in reality and at scale.
Why is it not feasible? I run 32 panels on my roof in the Pacific Northwest, which is known for its rain, and they cover 100% of my residential and electric car usage when considered on an annual basis. If roofs required solar panels, we would have a large part of power generation done at a much lower installation cost and could focus on large-scale energy storage projects instead. My payoff period is 11-12 years; the system is warranted to 20. No breakthrough required.
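For anyone wanting to sanity-check that kind of claim, a simple payback estimate is just installed cost divided by annual bill savings. The figures below are entirely made up for illustration (the comment above doesn't give system cost or electricity rates), but they show the shape of the calculation:

```python
# Rough simple-payback sketch. Every number here is a hypothetical assumption,
# not data from the comment above or from the article.
system_cost = 14000.0          # assumed installed cost after incentives, dollars
annual_generation_kwh = 9000   # assumed annual output of a ~32-panel system
electricity_price = 0.13       # assumed $/kWh offset via net metering

annual_savings = annual_generation_kwh * electricity_price
payback_years = system_cost / annual_savings
print(f"annual savings: ${annual_savings:.0f}, simple payback: {payback_years:.1f} years")
```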
That's right. All my neighbors use the excess solar power during the summer, and during the winter I use more grid power than I create. I get credits in the summer and use them up during the winter.
A giant leap in solar efficiency doesn't solve the storage problem, and the storage problem doesn't need a magic physics solution, just large scale heads down engineering. My point is: solar is efficient enough now, and a sound economic choice. I make enough power in a year just with my roof for my house and car. I do need a storage solution to do power leveling, which is what I use the grid for now.
That's where we disagree. You think power storage will take a magic physics solution, and I think it is normal engineering (pumped storage, rail/gravity potential generators, utility scale battery banks for leveling).
I'm not making claims about the actual merits of fuel cells, mind you. But claiming they're further out than fusion is bizarre. Some people are driving them around today. Nobody is generating net power from fusion today.
I am talking about the kind of fuel cells that could be used with wind and solar to store energy and be deployed in an economically feasible, distributed way.
I don't know of anything like that either. Small fuel cells rely on expensive precious metal catalysts and large ones have too much capacity to be a good match for the distributed generation sector.
A technology that is commercially available but too expensive at present for large scale adoption, like fuel cells, is more mature than a technology that hasn't been demonstrated even in a cost-is-no-object context. That's why it's strange to hear fuel cells described as even further out than fusion power.
In the past decade, (2010 -> 2019) the area efficiency of commercial crystalline solar modules has improved from 14% to 20%. This is modules, not cells. That's an increase of more than 40%. More detail here - https://www.nrel.gov/pv/cell-efficiency.html
Cost. We can make single-junction GaAs solar cells with efficiency close to 30% (damn close to the theoretical limit for a single-junction cell of ~33%) and we can make GaAs-based multijunction cells that have efficiencies above 40%. It's just that those cells are so expensive that they really are only useful for niche applications (e.g. space).
Meanwhile silicon and CdTe solar cells continue to decrease in price and increase in efficiency year after year. At this point the current technology is cheap enough to be profitably used to supply power in many regions of the world. Anyone who says we need a breakthrough in solar to make it economical is wrong. At this point the solar panels themselves are less than half the cost of a solar installation.
Nearly all press releases talking about efficiencies over 50% are just talking about impractical theoretical limits that would require cooling the solar cells to freezing temperatures or other schemes (i.e. talking about the Carnot limit instead of the Shockley-Queisser limit).
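For context, a back-of-the-envelope comparison of the two limits (my own round numbers, not figures from the parent comment):

```python
# The Carnot limit between the Sun's surface (~5800 K) and ambient (~300 K)
# is far above the Shockley-Queisser single-junction limit of roughly 33%,
# which is why quoting the former makes a cell sound far better than it can be.
T_SUN = 5800.0     # K, approximate solar surface temperature
T_AMBIENT = 300.0  # K, approximate ambient temperature

carnot_limit = 1.0 - T_AMBIENT / T_SUN
print(f"Carnot limit:            {carnot_limit:.1%}")
print("Shockley-Queisser limit: ~33% (ideal single junction)")
```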
Interesting work - seems like this could also be used to create a thin film that could yield thermal vision, if indeed the nanotubes upshift IR frequencies to visible light. But I’m not a physicist.
This has been discussed in several previous threads on HN. From what I’ve been able to gather, creative ways to reduce the energy consumption of A/C will require a change to building designs, and/or increase the cost and complexity of new buildings. Whereas the cost of running the A/C is up to the future tenant(s).
This is why you see some companies applying eco friendly tech to some new buildings (Apple) but not in general commercial development.
Again this is just what I’ve surmised from reading threads like this. YMMV
Interesting to contemplate what could be done to shift those incentives around. Obviously there are certifications (LEED), and there's straight-up regulation, but would there be a way to mandate that a builder or landlord is responsible for a portion of future HVAC expenses such that they are motivated to get this right upfront?
I’m sure if some energy saving tech was commercialized and reasonably cost effective local municipalities could update building codes. LEED standards could be updated as well. What goes into a building isn’t entirely up to the developer.
But right now most of this stuff is just in a lab. Frankly the tall cooling towers you see on some buildings weren’t in widespread use until at least the 50’s even though they were invented at the turn of the century.
The market should (and has in the past) figured this out. Energy costs money, tenants want to spend less money. Tenants will pay more money up front to save themselves money down the line. Therefore landlords should have an incentive to build energy efficient structures even if they aren’t paying for the energy.
The problem is when an energy source with a high externality (climate-related or otherwise) isn't priced in. This isn't a hard problem from the government's perspective, though: just price in the externality through some free-market system. Carbon credits are a good example of this.
Part of the reason is that it is hard to gauge what the energy bill of an apartment will be before you start renting. That is where certification should come in.
> channel mid-infrared radiation (heat energy) into light energy
> absorbs thermal photons and emits light.
Goddamnit, no! None of this is "heat", it's just one range of light being converted into another! This is shitty "science"-journalism parroting a common misconception.
After all, the only reason we associate infrared with "heat" in the first place is that it's useful for detecting things which happen to be at a range of temperatures, temperatures which just happen to be slightly warmer than the operating state of self-reproducing bags-of-mostly-water on a small rocky planet.
As far as I can tell, this is functionally just a coating with low emittance in the IR outside a narrow band. A matched PV device that is kept cool can, in principle, approach the Carnot efficiency for the temperature difference between the emitter and the PV junction. If the emitter is the sun, then the source temperature is very high and the Carnot efficiency isn’t a major limit. If the source is a solar panel, I’m having a hard time seeing how this is useful.
I can see this being somewhat useful as a no-moving-parts heat engine for something like a solar concentrator, but there’s another relevant thermodynamic limit: even if this magic material has emissivity 1, it won’t radiate at a greater power per unit area than the blackbody spectrum predicts. At non-crazy temperatures, this is not very high, which will limit output for small things like solar concentrator targets.
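For a sense of scale, here's a quick Stefan-Boltzmann sketch (my own round numbers, not from the article); even a perfect emitter at everyday temperatures radiates less power per square metre than peak sunlight delivers:

```python
# Stefan-Boltzmann law: radiated power per unit area of an ideal (emissivity-1)
# blackbody is P/A = sigma * T^4. Temperatures below are illustrative.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

for label, t in [("room temperature (300 K)", 300),
                 ("boiling water (373 K)", 373),
                 ("incandescent filament (2800 K)", 2800)]:
    print(f"{label:32s} -> {SIGMA * t**4:12.0f} W/m^2")
```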
So I can see this being useful to convert waste industrial heat, or maybe as a bottoming engine for a combined cycle plant, but I am having trouble understanding how it could be useful for solar.
For solar, it somehow has to integrate with the solar panel, so that the same square meter is used both for direct photovoltaic generation of electricity, and for this IR capture, so energy is obtained from a wider band.
I understand your point that it can't just be driven by a secondary IR emission from a warmed-up solar panel; that's not hot enough to be that useful.
In a more detailed article, there is talk about the carbon nanotube device being useful because it can withstand high temperatures:
One sketch of such a system is to physically mate an effective high T solar absorber to an effective narrow spectrum photon emitter (as described in this work). Then that can be coupled with a typical solar cell with bandgap matched precisely to the emitter wavelength. So your solar cell is near ideally efficient for the photons it receives. As I understand it, the emissivity of these materials can exceed blackbody radiation in the near-field but not the far-field (or something like that? Search "Superplanckian emission").
> Then that can be coupled with a typical solar cell with bandgap matched precisely to the emitter wavelength. So your solar cell is near ideally efficient for the photons it receives.
One way or another, once you've converted sunlight to heat, you are limited by the Carnot efficiency. For the 80% efficiency they claim, if all of it comes from thermophotovoltaics, they need a hot side temperature at least 5x ambient, which is over 1000 C. I wish them luck getting anything resembling a solar panel up to 1000 C. (I'm not, in any respect, saying it's impossible -- I'm saying it's very hard. You'd need excellent spectrally or directionally specific absorption to avoid re-radiating all that heat out the top of your panel, and you'd need conventional transparent insulation to stop conduction.)
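That factor of 5x comes straight from the Carnot formula; a quick check with assumed round numbers (300 K ambient is my assumption, not a figure from the article):

```python
# Carnot efficiency: eta = 1 - T_cold / T_hot, so hitting eta = 0.8 with
# T_cold ~ 300 K requires T_hot = T_cold / (1 - eta) = 1500 K.
T_COLD_K = 300.0   # assumed ambient / PV-cell temperature
ETA_TARGET = 0.80  # claimed overall efficiency

t_hot_k = T_COLD_K / (1.0 - ETA_TARGET)
print(f"required hot-side temperature: {t_hot_k:.0f} K ({t_hot_k - 273.15:.0f} C)")
```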
On top of that, super-Planckian emission or no, if it's limited to the near field, then the PV cell has to sit very, very close to the hot surface. That PV cell needs to be kept near room temperature to get that efficiency.
This whole thing seems extraordinarily complex for something that wants to be cost-effective.
The absorber/emitter assembly doesn't really resemble a solar cell. Demonstrated tungsten emitters have exceeded 1500K - it's not really that wild, it's what's in incandescent light bulbs. You concentrate sunlight typically.
> Researchers at Rice University developed a method to convert heat into light that could boost solar efficiency from 22% to 80%
The conditional tense signals that the researchers didn't actually do so, but that the study might enable it. The article re-iterates that this is speculation:
> The implications of their discovery are significant. Research from Chloe Doiron, a Rice graduate student, revealed that 20% of industrial energy consumption is wasted through heat. It could also mean an increase in the efficiency of solar cells, which are currently only 22% efficient at their peak. Recycling the thermal energy from solar cells using carbon nanotube technology could increase the efficiency to 80% according to the researchers. ...
If you want to get pedantic, you can't "make electricity" either because 'electricity' isn't a defined concept. Usually when people use words such as "make electricity", they actually mean to refer to energy or power transferred using electromagnetic fields interacting with a conductor. But a purely chemical reaction that charges a battery could be said to "make electricity" without even involving that.
I was going for a simpler idea: Light is photons, electricity is moving electrons. And yeah, if you're talking about electromagnetic fields we're back to photons again. It just struck me as a sentence that was very misleading to lay people.
On the topic of the HN title: there is research being done into carbon nanotube rectennas. Theoretical efficiency is in the high 90s, percent-wise. The problem is that they have very narrow bandwidth. Also, as far as I'm aware, no one has even built lab samples that perform decently.
I get the maybes and the caveats and the misleading title, but the direction seems clear - solar efficiency is a tractable materials science problem - and "we" should see this as a penicillin moment: invest hugely in the research and development until we find the right processes and approach to make cheap, ubiquitous solar. Manhattan Project levels of funding is what I mean, because the payoff is humans cutting their carbon output.
The penicillin moment happened because penicillin actually came into existence and became widely available pretty rapidly.
Research and speculation is awesome stuff, but the hard part isn't discovering new concepts. It is actually making those concepts into a marketable and affordable good.
The evidence for this is that even though there is essentially a new 'breakthrough' discovery every other week for solar panels, or electric motors, or batteries, we are still using what amounts to cutting-edge tech from the late 90s.
In the modern era this means we generally have to wait till the patents expire and market competition kicks in in order to get the price low enough and the product perfected enough to see widespread usage. If it goes anywhere at all.
It's also worth noting that Florey, the man who is largely responsible for making penicillin a practical drug, refused to patent his early innovations in order to make it as widespread as possible.
If this is the case, then wouldn't "buy a bunch of patents and (with great fanfare) make them open-source" be a relatively low-complexity way for a billionaire who feels like making a name for himself to accelerate progress on fighting climate change?
Penicillin had to undergo a decade of R&D to go from Fleming's petri dish to a cheap, practical drug. I cannot find the article now, but I believe the investment levels from the US military were compared to the Manhattan Project (an imperfect comparison, obviously); the point is this was not the "gosh, what luck" story it is in mass media.
Having made that first breakthrough, world class teams across the globe fought to bring the efficiency up from "froth on the top of a brew" to "gallons of the stuff"
Florey was a big part of the story but so were teams in US and Europe and then the US military scaled it up beyond belief.
We spent money, targeted money, on the best teams globally and then put serious industrial might to it once they found the answers.
That exact approach is what I am calling for again.
And as for patents - if enough global effort is put in, with enough government funds, the pressure to put the results "in public hands" rather than hold out for patents is really strong
This is not quite correct. Silicon solar cells have a maximum efficiency of 32%. Various approaches, such as using carbon nanotubes as antennas to directly rectify visible light (http://NovaSolix.Com), may eventually yield much higher overall efficiencies.