JSON for Modern C++ version 3.10.0 (github.com)
2 points by codewiz | 2021-08-18 | 105 comments




This doesn't appear to be all that "modern": it doesn't support string_view for lookup without extra allocations, and it doesn't support unordered_map (or even use it as the default) for objects.

It seems they're targeting C++11?


people love to complain

People love correct statements. C++11 was released 10 years ago and there are three newer versions.

> people love to complain

Do you find it unreasonable to point out that what's been advertised doesn't match what's being offered?


As I tried to describe earlier: "modern C++" is not necessarily "using the latest standard", but rather "C++ since it was updated with C++11 (and later)".

Aiming at C++11 is not a reasonable definition of "modern", as it was C++'s second ISO standard, published a decade ago, and it has since seen two or three major updates (depending on one's opinion of C++14).

It's not the "aiming at C++11", but rather "Write code that does not look odd in a code base that is using constructs from C++11, C++14, C++17, etc." - The library uses C++11 to implement an API that should not feel weird when used together with other containers from the STL.

In C++ land, "modern" has become synonymous with post-C++11. Effectively a domain-specific definition. Reasonable, considering the difference between pre- and post-11. Pre- and post-20 will probably be treated similarly in a decade.

C++11 was a huge shift in how C++ is written, and the term coined for "code written using the new techniques" was "Modern C++". Whether you think that term should instead mean "the latest C++ version" is a different matter altogether.

C++11 is still common enough that if you want a large user base, it's the target. However, this library parses to a DOM and doesn't use features like PMR, so it doesn't need much beyond the standard containers anyway.

Especially among the C++ community groups.

It is changing, and I saw some surveys showing C++17 at around half of shops. I know my JSON library wouldn't be possible without C++17, so it's a loss of eyes.

Yeah, the thing is how much of C++17 is actually used beyond the language version.

For example, C++/WinRT is a C++17 library, yet plenty of samples use C style strings and vectors.


I use a lot of core C++17-isms. They make some things doable and others clearer. But I think C++17 really helped a lot in generic programming, in addition to the containers it introduced.

I think it makes sense to target C++11. Just that you shouldn't then call the library "modern".

There is also a SAX parser.

But to be honest, I have not yet played around with PMR.


I know when I was benchmarking JSON Link, I saw a 30-50% increase in performance when using better allocators, and PMR can help a lot there with things like bump allocators.
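For reference, the bump-allocator idea in PMR terms is roughly this (a minimal C++17 sketch; buffer size and contents are made up):

    #include <cstddef>
    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        // One upfront block; the monotonic resource "bumps" a pointer through it
        // and releases everything at once, avoiding per-node heap allocations.
        std::byte buffer[64 * 1024];
        std::pmr::monotonic_buffer_resource arena(buffer, sizeof(buffer));

        std::pmr::vector<std::pmr::string> tokens(&arena);
        tokens.emplace_back("parsed");
        tokens.emplace_back("values");
        // All memory is reclaimed when `arena` goes out of scope.
    }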

For many, even using RAII, which Turbo Vision for MS-DOS already made use of, is "modern".

> and it doesn't support unordered_map (or even use it as the default) for objects.

std::unordered_map has massive overhead in space and time. Maybe I misunderstand your idea, but outside of extremely niche use cases, std::unordered_map is basically never the right tool (unless you don't care in the slightest about performance or memory overhead).


Source? A hash map should in principle be faster than a tree, with far fewer comparisons, made against a computed hash value instead of the whole object.

Well, std::unordered_map is faster than std::map, but it is still slow, since it uses linked lists for its buckets, with lots of dynamic allocation. See also:

https://stackoverflow.com/a/42588384/1593077

and the link there. Not sure that's what GP meant though.


I don't believe it has to; some kind of bucket optimisation (multiple elements per allocation, instead of one element per allocation) should also meet all the constraints the standard sets for this type.

And I don't like it when people use ridiculous hyperbole to describe what is in reality barely perceptible overhead, so it would be good if the person I originally responded to came out and defended his POV. Hopefully using actual numbers instead of agitated handwaving and screaming...


Abseil has a B-tree-based map and it performs much better. I think it maintains the same constraints (iterator invalidation/exception safety, ...) as std::unordered_map.

The issue is that the implementors won't change their implementation of unordered_map, as it would be an ABI break, to put it simply. It could be better. But also, there are other tools like flat maps and open addressing that aren't in the standard library.


> some kind of bucket-optimisation (multiple elements per allocation, instead of one element per allocation) should also meet all constraints the standard sets for this type

Pretty sure that makes it impossible to implement merge without pointer/iterator invalidation, which is a constraint imposed by the standard (https://en.cppreference.com/w/cpp/container/unordered_map/me...).

> it would be good if the person I originally responded to came out and defended his POV

See https://probablydance.com/2017/02/26/i-wrote-the-fastest-has... if you want numbers, or for example this talk by the same author https://www.youtube.com/watch?v=M2fKMP47slQ.


> but outside of extremely niche use cases, std::unordered_map is basically never the right tool

std::map is only useful if you need the data to be ordered, otherwise std::unordered_map is the right default choice.

> std::unordered_map has massive overhead in space and time.

Yes, std::unordered_map uses a bit more memory, but please show me some benchmarks where it is slower than std::map - especially with string keys!

BTW, for very small numbers of items the most efficient solution is often a plain std::vector + linear search.
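As a rough illustration (keys and layout made up):

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>

    // For a handful of keys, a flat vector of pairs plus linear search often beats
    // std::map/std::unordered_map: one contiguous allocation, no hashing, no rebalancing.
    using SmallObject = std::vector<std::pair<std::string, std::string>>;

    const std::string* find_value(const SmallObject& obj, const std::string& key) {
        auto it = std::find_if(obj.begin(), obj.end(),
                               [&](const auto& kv) { return kv.first == key; });
        return it != obj.end() ? &it->second : nullptr;
    }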


It's a gradient, from the vector you mentioned all the way up to hash maps.

And ordered maps fall somewhere in the middle.

They might make sense if you're creating massive amounts of medium sized maps.

Or if you need range searches etc.

It's all compromises, there are no hard rules.


> BTW, for very small numbers of items the most efficient solution is often a plain std::vector + linear search.

This is why flat_map is a great solution. It's basically an ordered vector with a map-like interface, but without the memory allocation (and cache locality issues) for each node. Until you start getting into thousands of elements it is probably the best choice of container.
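A minimal sketch of the underlying idea (not boost::container::flat_map itself, just the sorted-vector lookup such a container is built on):

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>

    // Keys kept sorted in one contiguous vector; lookup is a binary search.
    // Unlike std::map there is no per-node allocation, and locality is much better.
    using FlatMap = std::vector<std::pair<std::string, int>>;  // invariant: sorted by .first

    int* find_value(FlatMap& m, const std::string& key) {
        auto it = std::lower_bound(m.begin(), m.end(), key,
                                   [](const auto& kv, const std::string& k) { return kv.first < k; });
        return (it != m.end() && it->first == key) ? &it->second : nullptr;
    }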


I didn't mean to imply it will perform worse than std::map. My point is that std::unordered_map has very particular design tradeoffs (primarily the requirement for arbitrary load factors and the bucket interface) that fundamentally impose large space and time overhead: every node has to be stored as a separately allocated linked-list entry.

That's sometimes ok, particularly if you don't care about performance (beyond asymptotic behavior). But if you do, you'd probably be well-advised to try out hash map implementations that actually don't leave performance on the table by design.


> I didn't mean to imply it will perform worse than std::map

You literally suggested to (almost) always prefer std::map...

> Every node has to be stored as a separately allocated linked-list entry.

But it's basically the same for std::map. Both containers have similar design constraints. I am aware that std::unordered_map uses more memory than std::map, but why should it be slower in the general case? After all, hashtables trade memory for speed.

> But if you do, you'd probably be well-advised to try out hash map implementations that actually don't leave performance on the table by design.

I agree. But the same is true for std::map (someone has already mentioned flat maps).


I think you might have it backwards. std::map is the one you only want to reach for in cases where you need a sorted dictionary (it keeps its keys sorted).

std::unordered_map is typically the go-to hash map on any given C++ day. You're probably fine with std::map outside of large n.

EDIT: being pedantic


What? I would argue that unordered_map is nearly always a reasonable choice when you need an unordered associative container. There is nothing particularly wrong with it.

Not sure what your rationale is for saying it’s only useful for extremely niche use cases. I would be curious to know why you think so.


It has the right asymptotic guarantees for a hash map, but beyond that it doesn't have much going for it. The degree to which it emphasizes pointer/iterator stability and arbitrary load factors is not a good tradeoff in almost any practical case (unless you do not care about performance).

See https://www.youtube.com/watch?v=M2fKMP47slQ for a more comprehensive discussion, or the sibling threads here.


It uses the ordering of std::map for its own comparison operators and hash function.

The development started in 2013, when C++11 was still modern. Since then, the term "modern C++" has, to my understanding, been a synonym for "C++11 and later". Of course, some code may look dated compared to newer C++ constructs, but the main goal is to integrate JSON into any C++ code base without making it look odd.

The string_view lookup is nearly done, but I did not want to wait for it, because it would have delayed the release even more.

I'm also working on supporting unordered_map. Using it as the container for objects would be easy if we just broke the existing API; the hard part is supporting it with the current (probably badly designed) template API.


You can provide alternative collections if needed by templating basic_json; std::map is just a default. Personally I like using a flat_map which is an ordered map built on top of a vector, which provides nice cache locality and is generally even faster than a hash map / unordered_map except for extremely large objects.

I prefer boost json

It does seem to have a very similar interface to this and has the performance of RapidJSON. So a nice balance in that regard.

How do people go about integrating boost these days? For any cross platform projects I use CMake and Boost seems like a _very_ scary library to try and build/ship on multiple platforms.

I use CMake almost exclusively (even for Windows) and using "find_package(Boost REQUIRED)" works most of the time. A lot of common boost libraries (algorithm, geometry) are header only and are not much of a hassle to include.

IIRC, you would need to modify the statement to "find_package(Boost REQUIRED COMPONENTS filesystem)" if you want to use boost dependencies that need a separate .dll/.so (in this case boost::filesystem). However I rarely need to use these, so I may be wrong.


CMake supports Boost out of the box, you just need to set BOOST_ROOT to your Boost install path.

With proper CI it's not a big deal


What exactly makes Boost scary to ship? I understand that Boost requires some attention to build right across platforms (see zlib support in Windows), but other than that it's pretty straightforward to add a FindBoost statement to your code and move on to other things.

You should be able to build and include boost json as a standalone subproject in CMake if you are using C++17. (Or also possible to use as header only lib)

It gets far more complicated with C++11, since you also need a ton of other boost modules there.

For more details, see its README: https://github.com/boostorg/json


Use Hunter for dependency management

https://github.com/cpp-pm/hunter

There is a bit of a learning curve, but it's the only dependency manager that does things right (all from within CMake)


    wget boost_1_77.tar.gz
    tar xaf boost*
    cmake ... -DBOOST_ROOT=.../boost_1_77
Works for the huge majority of Boost; most parts that required a build were the parts that got integrated into C++, like thread, regex, chrono, and filesystem.

FWIW Boost.Json in particular doesn't have any source files, it's just header-only templates. You don't actually need to build or link boost to use it; you just need the header files to be in the include path.
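For reference, a minimal sketch of the header-only route (assuming the documented convention of including <boost/json/src.hpp> in exactly one translation unit; the JSON content is made up):

    // main.cpp
    #include <iostream>
    #include <boost/json/src.hpp>  // include in exactly one .cpp; elsewhere use <boost/json.hpp>

    int main() {
        boost::json::value v = boost::json::parse(R"({"pi": 3.14, "tags": ["a", "b"]})");
        std::cout << v.at("pi").as_double() << "\n";
        std::cout << boost::json::serialize(v) << "\n";
    }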

Cool project. Others may also be interested in simdjson which parses at about 4GB/sec.

https://github.com/simdjson/simdjson/blob/master/doc/basics....


yes another great one

these projects are just commendable!


simdjson is an awesome project. However, it's analogous to a SAX parser for XML vs a DOM model. If that is all you need (or if you are building something like a DOM yourself) then it is pretty much impossible to beat. Having an entire JSON document parsed and in memory at once though has a bunch of advantages for a friendlier API.

> which parses at about 4GB/sec.

That's a weird thing to say. Doesn't that depend on what hardware it's running on?


> The CSV format itself is notoriously inconsistent

So what? The format is so simple that the inconsistencies don't even matter, and hell, the format can be inferred from the data, and that can even be automated. Unlike JSON/XML, most software can automatically make sense of CSV; JSON needs a developer to understand it to transform it into workable data.

> CSVs often begin life as exported spreadsheets or table dumps from legacy databases, and often end life as a pile of undifferentiated files in a data lake, awaiting the restoration of their precious metadata so they can be organized and mined for insights.

CSVs are the de facto way to export spreadsheets, and if you have to do this, then the problem is that you are still using spreadsheets in your data flow, not CSVs. Same goes for legacy systems.

Also, CSVs come from places where having more sophisticated data serializers is near impossible, and the time to code, compute, parse and analyze CSVs is trivial compared to any other format out there.

> I’ve spent many years battling CSVs in various capacities. As an engineer at Trifacta, a leading data preparation tool vendor, I saw numerous customers struggling under the burden of myriad CSVs that were the product of even more numerous Excel spreadsheet exports and legacy database dumps. As an engineering leader in biotech, my team struggled to ingest CSVs from research institutions and hospital networks who gave us sample data as CSVs more often than not.

Of course, if you are working in biotech then you'll work with scientists (horrible coders most of the time) and legacy systems, because it's a domain that's very averse to change that might break stuff. And it's not the CSV format's fault if it's used as the wrong tool for the job.

> Another big drawback of CSV is its almost complete lack of metadata.

Again, emphasis on the right tool for the job. For simple tasks, metadata is not needed.


You mistakenly replied on the wrong thread. Here's the one you meant to reply on:

https://news.ycombinator.com/item?id=28221654

Your quote "The CSV format itself is notoriously inconsistent" comes from this page which the thread discusses:

https://www.bitsondisk.com/writing/2021/retire-the-csv/


I am not a smart man (sometimes)

I agree, and disagree. You are smart, just not at every moment. Same as me.

If anyone out there has never clicked on the wrong button, let them cast the first stone.


The API side of this library is pretty good. I've generally avoided it due to the compile time, and preferred rapidjson for its better performance and compile time when I want a C++ API, or cJSON when a C API is fine and I really just want an extremely low compile time. (Lately I've been playing with a live-coding C++ context for a game engine as a side project; I use cJSON there, and rapidjson on the main project. cJSON's tradeoff is that it sticks with a single global allocator that you can only customize in one location, and it also seems to perform slightly worse on parse.)

cJSON's C API or rapidjson's somewhat less ergonomic API has felt fine because I usually have read/write be reflective and don't write that much manual JSON code.

Specifically for the case where compile time doesn't matter and I need expressive JSON object mutation in C++ (or some tradeoff thereof), I think this library seems good.

Another big feature (which I feel like it could be louder about -- it really is a big one!) is that you can also generate binary data in a bunch of formats (BSON, CBOR, MessagePack and UBJSON) that have a node structure similar to JSON's -- all from the same in-memory instances of this library's data type. That sort of thing is something I've desired for various reasons: smaller files with potentially better network send times for asynchronous sending/downloads (for real-time multiplayer you still basically want to encode to something that doesn't embed field string names). I do think I may end up doing it one layer up, at the engine level, and just have a different backend other than cJSON etc. too, though...
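For reference, the binary-format generation is roughly this (sketch with made-up content, using the library's documented to_cbor/to_msgpack helpers):

    #include <cstdint>
    #include <vector>
    #include <nlohmann/json.hpp>

    int main() {
        nlohmann::json j = {{"name", "player1"}, {"pos", {1.0, 2.0, 3.0}}};

        // Same in-memory value, different wire formats.
        std::vector<std::uint8_t> cbor    = nlohmann::json::to_cbor(j);
        std::vector<std::uint8_t> msgpack = nlohmann::json::to_msgpack(j);

        // Round-trip back to a json value.
        nlohmann::json restored = nlohmann::json::from_cbor(cbor);
    }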


Pluses: as a user of nlohmann-json, I also like the support for JSON Pointer, JSON Patch and JSON Merge Patch, which comes in handy at times. I like their way of handling to/from_json (declare functions with the appropriate signature and namespacing and the lib will pick them up seamlessly in implicit conversions). The "standard" container facades are appreciated.

Minuses: I'd like a way to append a (C-)array into a JSON array "at once" rather than iteratively (i.e., O(n) instead of O(n log n)). Also, the lack of support for JSON Schema is... slightly annoying.
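For anyone who hasn't used the to/from_json mechanism mentioned above, a minimal sketch (the type and its fields are invented for illustration):

    #include <string>
    #include <nlohmann/json.hpp>

    namespace app {  // hypothetical namespace and type
    struct Person {
        std::string name;
        int age = 0;
    };

    // Declared next to the type; found via ADL and used in implicit conversions.
    void to_json(nlohmann::json& j, const Person& p) {
        j = nlohmann::json{{"name", p.name}, {"age", p.age}};
    }

    void from_json(const nlohmann::json& j, Person& p) {
        j.at("name").get_to(p.name);
        j.at("age").get_to(p.age);
    }
    }  // namespace app

    int main() {
        nlohmann::json j = app::Person{"Ada", 36};  // uses to_json
        app::Person p = j.get<app::Person>();       // uses from_json
    }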


Yeah, the support for pointer/patch etc. is a definite plus. The customization point thing I tend to do with my own customization points, but it's pretty good if a bunch of libraries settle on a standard customization point (e.g. I think serde in Rust has achieved that a bit due to the ecosystem there) (and I definitely want it to be static).

I didn't realize that about the array complexity. Can you not just initialize a JSON array of N elements in O(N) (conceding a `.reserve(N)` style call beforehand if required)? rapidjson is pretty good about that sort of thing, cJSON's arrays are linked list so I basically think of its performance as at a different level and it's mostly about compile time for me.


I haven't found a way to do that. The only O(n) to-json-array way is using the initializer list constructor AFAICT.

Can you elaborate on your minuses - I don't understand what you mean with "at once" in that context.

For JSON schema, I've found that this third party library works well: https://github.com/pboettch/json-schema-validator

While I agree with you on compile times getting slower with nlohmann/json, I think that its performance is fairly adequate. I've been using it on the ESP32 in single-core mode and it's still fast enough for everything. Unless you parse tens of millions of JSON objects per second, you won't really notice any difference between the various JSON libraries.

Ironically, cJSON is _worse_ in my use case due to it not supporting allocators at all, as you wrote. Nlohmann fully supports C++ allocators, so it's trivial to allocate on SPIRAM instead of the limited DRAM of the ESP32. Support for allocators is why I often tend to pick C++ for projects with heterogeneous kinds of memory.

Also, Nlohmann/JSON supports strongly typed serialization and deserialization, which in my experience vastly reduces the number of errors people tend to make when reading/writing data to JSON. I've found it simpler to rewrite some critical code that was parsing very complex data using cJSON to Nlohmann/JSON than to fix up all the tiny errors the previous developer made while reading "raw" JSON, parsing numbers as enums, strings as other kinds of enums, and so on.


Great to hear that you had good experiences with user-defined allocators. It would be great if you could provide a pointer to an example, because we always fall short in testing the allocator usage. So if you had a small example, you could really improve the status quo :)

Thanks a lot to you for writing such an awesome library! :)

This is briefly how I use C++ Allocators with the ESP32 and Nlohmann/JSON (GCC8, C++17 mode):

I have a series of "extmem" headers which define aliases for STL containers which use my allocator (ext::allocator). The allocator is a simple allocator that just uses IDF's heap_caps_malloc() to allocate memory on the SPIRAM of the ESP-WROVER SoC.
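A simplified sketch of that kind of allocator, assuming ESP-IDF's heap_caps_malloc/heap_caps_free and the MALLOC_CAP_SPIRAM flag (the actual ext::allocator may differ in details):

    #include <cstddef>
    #include <new>
    #include "esp_heap_caps.h"

    namespace ext {
    template <typename T>
    struct allocator {
        using value_type = T;

        allocator() noexcept = default;
        template <typename U> allocator(const allocator<U>&) noexcept {}

        T* allocate(std::size_t n) {
            // Ask IDF for SPIRAM-backed memory instead of internal DRAM.
            void* p = heap_caps_malloc(n * sizeof(T), MALLOC_CAP_SPIRAM);
            if (p == nullptr) throw std::bad_alloc{};
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) noexcept { heap_caps_free(p); }
    };

    template <typename T, typename U>
    bool operator==(const allocator<T>&, const allocator<U>&) noexcept { return true; }
    template <typename T, typename U>
    bool operator!=(const allocator<T>&, const allocator<U>&) noexcept { return false; }
    }  // namespace ext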

I then define in <extmem/json.hpp>:

    namespace ext {
        using json = nlohmann::basic_json<std::map, std::vector, ext::string, bool, long long, unsigned long long, double, ext::allocator, nlohmann::adl_serializer>;
    }
where `ext::string` is just `std::basic_string<char, std::char_traits<char>, ext::allocator<char>>`. In order to be able to define generic from/to_json functions in an ergonomic way, I had to reexport the following internal macros in a separate header:

    #define JSON_TEMPLATE_PARAMS \
        template<typename, typename, typename...> class ObjectType,   \
        template<typename, typename...> class ArrayType,              \
        class StringType, class BooleanType, class NumberIntegerType, \
        class NumberUnsignedType, class NumberFloatType,              \
        template<typename> class AllocatorType,                       \
        template<typename, typename = void> class JSONSerializer

    #define JSON_TEMPLATE template<JSON_TEMPLATE_PARAMS>

    #define GENERIC_JSON                                            \
        nlohmann::basic_json<ObjectType, ArrayType, StringType, BooleanType,             \
        NumberIntegerType, NumberUnsignedType, NumberFloatType,                \
        AllocatorType, JSONSerializer>
I am now able to just write stuff like the following:

    JSON_TEMPLATE
    inline void from_json(const GENERIC_JSON &j, my_type &t) {
        // ... 
    }

    JSON_TEMPLATE
    inline void to_json(GENERIC_JSON &j, const my_type &t) {
        // ...
    }
And it works fine with both nlohmann::json and ext::json.

In the rest of the code, everything stays the same; I simply use ext::json (and catch const ext::json::exception&) as if it were the default version, and it works great. FYI, I'm currently using nlohmann/json v3.9.1.


For what it's worth, cJSON does support a (global) allocator override, using cJSON_InitHooks().

Here's what I use on ESP32 to push allocation to SPIRAM: https://gist.github.com/cibomahto/a29b6662847e13c61b47a194fa...
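In essence it is just a pair of wrappers handed to cJSON_InitHooks(), something like this (simplified sketch; error handling omitted):

    #include <cJSON.h>
    #include "esp_heap_caps.h"

    // Route all of cJSON's allocations to SPIRAM, process-wide.
    static void* spiram_malloc(size_t size) {
        return heap_caps_malloc(size, MALLOC_CAP_SPIRAM);
    }

    static void spiram_free(void* ptr) {
        heap_caps_free(ptr);
    }

    void init_cjson_spiram() {
        cJSON_Hooks hooks = {};
        hooks.malloc_fn = spiram_malloc;
        hooks.free_fn = spiram_free;
        cJSON_InitHooks(&hooks);
    }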


Yes, but I don't want to change the allocator for _all_ of cJSON - we often have to use third-party libraries which rely on cJSON, shipped as binary blobs, which haven't been tested with that. Nlohmann gives you the ability to pick what allocator you want per instance instead of globally, which I greatly prefer.

I now know what my next test will be for DAW JSON Link. I didn't even think of trying an ESP32, and the memory requirements are just a small amount of stack space plus the data structures being serialized or the resulting output buffer or string.

NM. Looks like it's GCC 8, which doesn't fully support C++17.


In my experience I've yet to find a C++17 feature I need that GCC 8 didn't have. I've written crazy complicated C++17 on the ESP32 without too much worries (the true issues are always space and memory constraints).

I know it is hitting compiler bugs in gcc8 that I don’t know how to work around without major changes. Then again, it might be fine for most things, I'm just remembering trying it in CI.

The only few bugs I hit were SIGSEGV from the compiler - annoying, yes, but nothing too serious. After I switched some of my desktop projects to C++20 I've started seeing ICE crashes in Clang, GCC and MSVC alike. Always very pleasant, I must say.

This prompted me to take another look, and it wasn't as bad as I remembered with GCC 8.4.0. A couple of minor changes and I am building now. Cool.

The tradeoff would be different in an embedded context, for sure. I don't know that cJSON being worse due to not supporting allocators is "ironic," it's just the tradeoff there. Compile time actually really matters to my usage (changing gameplay code and seeing results immediately). nlohmann json was actually increasing my compile time by nontrivial amounts (like a 50-100% increase).

Re: strong typed -- that's basically what I do with cJSON using a static reflection system in C++ (it recursively reads / writes structs and supports customization points). Agreed that that approach is more sensible than writing raw read / write code yourself.


This is the library I settled on for a project a couple of years ago. Although I was intrigued by the magic of expression templates, my usage is actually pretty straightforward and boring. One thing it has that some of the other libraries I evaluated lack is the ability to parse numeric strings yourself, which I wanted for a decimal floating-point type.

There are lots of other libraries, and surely some faster ones, but I haven't felt any need to change. I was nervous when I got an inexplicable overload ambiguity in xlclang++ for AIX, but it went away when I grabbed a newer version of the library.


I remember having used nlohmann's single-header JSON parser library around 2018 on a client app and found it to be pretty nifty. For REST, I had used Boost.Beast.

Off topic: is there any tool which can be used to query a JSON file and that also has autocomplete functionality for keys?

Huh, I was just in some code that uses this library a few days ago. I wish it included a base64 implementation for more efficiently encoding arrays of bytes or floats. JSON is inefficient for large arrays of numbers.

Why wouldn't you use a base64 library? This is prime library-bloat mindset. Base64 has no place in a JSON lib; there are a ton of implementations out there.

A base64 implementation is very small, but not completely trivial to the point where you should implement it yourself, or paste in a one liner from Stack Overflow. It's widely used with JSON. C++ has no package manager to make it easy to import tiny libraries for purposes like this, so C++ libraries should be a little more self contained than npm packages or rust crates. It would absolutely be reasonable to include a base64 implementation in a C++ JSON library for those reasons.

Uh, it has plenty of methods for more compact encodings? https://github.com/nlohmann/json#binary-formats-bson-cbor-me...

Sure, if you want to change the format of the whole message. If you're communicating with something that accepts JSON and you can't or don't want to change that, then having a more efficient option is nice.

"Modern" C++.

The second paragraph talks about setting a #define before including a header, as if this is still the 90's.


Unfortunately, this antipattern has persisted way beyond the 90s. But, you know, if the only way the standard provides to achieve a certain goal is a shitty hack, people will use the shitty hack.

what's wrong with #define(s)?

The compiler is blind to them, so they can't be reasoned about / carried forward anywhere / allowed for in one way or another.

Moreover, because of `#define`s, the effect of including a file can be entirely different, so a compiler can never say "Oh, I know what's in this include file already, I don't need to include it again in other translation units".

Of course #define's have many uses, and at present cannot be avoided entirely, but C++ has been making an effort to have other mechanisms put in place so as to obviate the use of #define's

In particular, look for CppCon talks about Modules.


interesting. so traditionally the preprocessor has been used to work around incompatibilities across various compilers/platforms and to completely enable/disable features at compile time. what's the new thinking there? can you build big projects across clang/g++/intel without preprocessor workarounds now? has the number of viable compilers in use dropped and compatibility amongst those that live on increased? how about stuff like debugging/instrumentation? or is the current thinking on all that stuff to always build with all of it and enable/disable via runtime branch? (along with some argument that configuration at build time was more trouble than it's worth on modern spacious/fast machines)?

> what's the new thinking there?

In a nutshell: Using compile-time facilities which are within the language itself. Compilers will expose information about the platform, about themselves, etc.

A couple of examples:

* https://en.cppreference.com/w/cpp/utility/source_location

* https://en.cppreference.com/w/cpp/types/endian

> can you build big projects across clang/g++/intel without preprocessor workarounds now?

Well, first of all, s/now/when C++20 is fully and widely adopted.

Still, a good question. Possibly not? I'm not sure. But you would need a lot less of them than previously.

> has the number of viable compilers in use dropped

Not really.

> and compatibility amongst those that live on increased?

Well, if they're compatible with the standard, that says more than it used to.

> how about stuff like debugging/instrumentation?

Maybe `if constexpr (in_debug_mode)`? Not sure. I'm not on the committee...
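Something along these lines already works in C++17, if the flag still comes in from the build system as a -D definition (a rough sketch; the macro and function names are made up):

    #include <iostream>

    // Hypothetical bridge between a build flag and the type system. The build
    // passes e.g. -DMYLIB_DEBUG=1 (or target_compile_definitions() in CMake).
    #if defined(MYLIB_DEBUG)
    inline constexpr bool in_debug_mode = true;
    #else
    inline constexpr bool in_debug_mode = false;
    #endif

    template <typename T>
    void trace(const T& value) {
        if constexpr (in_debug_mode) {
            // Discarded at compile time in release builds, but the code still
            // has to parse, unlike an #ifdef block.
            std::cerr << "trace: " << value << '\n';
        }
    }

    int main() {
        trace(42);
    }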


It's definitely not the only way, though.

It can be a trade-off, but becoming less and less so. I'll admit to not looking at the implementation, but when you sell it as "modern" then it's not OK to do it for performance, for example.

And almost every single use I've seen "for performance" actually has zero or negligible performance impact.


Of course you'd get your compiler to globally define that symbol rather than literally using a #define. Assuming you're using CMake, that means using target_compile_definitions()

Well, depending what it is you can also templatize, pass in lambdas, use inheritance, or whatever else you can think of.

#define definitely is not "modern". If nothing else it'll set you up for having problems modularizing it in the future.


The feature in question is to enable a debug mode. That is very much the sort of thing that you want to control from your build system. I've already said how you'd do that with a #define, and you could easily hook that up to a CMake cache variable. How are you going to get CMake to control what base class / lambda / non-type template parameter gets used? The simplest option that comes to mind is to maintain two small separate source files in your application (which typedef to the appropriate template instantiation) and switch between them depending on the build parameter. Compared to an argument to the compiler to set a preprocessor definition, that would be a mess.

A #define is just a source code generator run by the build system, with fewer steps. In my opinion the fact that it has fewer steps doesn't make it less of a mess.

I'm not talking about "The simplest option that comes to mind", but "modern C++".


> A #define is just a source code generator run by the build system, with fewer steps.

OK, if you're getting philosophical, you're welcome to to think of the -D switch as being a type of source code generation.

But there's a big practical difference. target_compile_definitions is one line in your CMakeLists.txt (two if you add a cache variable for it) and everyone can understand it instantly. Whereas creating a myappconfig.hpp.in and using configure_file() and #including it from all your actual source files (yuck) is much greater mental overhead both to create and maintain in practice. I know which I'd prefer.

> I'm not talking about "The simplest option [available in modern editions of C++] that comes to mind", but "modern C++".

I added a little to your quote there, forgive me. But I hope it clarifies that they're the same thing. If modern editions of C++ don't add any facilities that make it easier to do conditional compilation (chosen from your build system), then #define IS "modern C++". The fact that there are other ways of doing it, which you consider cleaner but are clearly impractical, is irrelevant.

(I'm just talking about this particular usage where you #define one constant the same way everywhere. I've seen horrific tricks where you #define one way then #include, then #define something and #include again (yes there are no include guards). Clearly that isn't forgivable, whatever version of C++ you're using.)

For the record, Rust has a nicer option for conditional compilation: feature flags. In principle a future version of C++ could adopt something similar, at which point #define really would disqualify that code from being "modern C++". But clearly that's not available in the most recent standard, or even planned in future standards AFAIK.


It's not philosophical, as it is code generation. The compiler doesn't see the #define. Which means the same source can be rebuilt with different defines and create hilarious undefined behavior.

This also means that there's no way to create a library that supports debug and not debug. And of course no ability for the user of the library to select this at runtime.

Yes, in the world of everything being compiled statically every single time (ala Google's monorepo) this can work, but code generation creates a HUGE number of gotchas, many of which are subtle.

> But I hope it clarifies that they're the same thing.

They're not, though.

void* is "the simplest option that comes to mind", but where a template solution is available, safer, and clearer, it's not "modern C++" to have code with void* all over the place.

"Easier" has nothing to do with it. There are many features in this fine library where the authors have put in effort to make it nicer, instead of just doing the easiest and quickest to implement.


> void* is "the simplest option that comes to mind", but where a template solution is available, safer, and clearer,

You're just playing silly semantics with "simplest" here. I meant that #define is genuinely the best overall solution to this particular problem, for the reasons I already gave. Whereas in void* vs templates, templates are genuinely the best overall solution.


#define makes things worse in almost all cases. It's extremely rare to see it done right, as opposed to just being the path of least resistance, that builds up tech debt where things break in hilarious ways.

I have seen many many well-intentioned "but surely this time it's the best solution", where no, it's just a time bomb.


I would be happy to have a different way to decide if a member variable exists or not without touching anything else.

Depending on the context of that question, concepts and/or just SFINAE could help with that.

I'm not shitting on the library in general, I'm saying the second paragraph disproves the title.


From the repository's README:

> Speed. There are certainly faster JSON libraries out there. However, if your goal is to speed up your development by adding JSON support with a single header, then this library is the way to go.

It is essentially admitting to being slow(er). We are not supposed to pay for code modernity with performance. In fact, it should be the other way around.

Does someone know the innards of this library as opposed to, say, jsoncpp or other popular libraries, and can describe the differences?


The library aims at making anything related to JSON straightforward to implement. For some applications, this is a good compromise. The comment in the README is like the answer to a FAQ "How fast is it?"

> This is a good compromise

You're making the false assumption that there is a compromise to be made - but there isn't. A modern JSON library should be the fastest (or about the same speed as the fastest), and vice-versa.


The comments here are uncharitable. For some reason, when someone posts a C++ project all the nitpickers come out of the woodwork. Is it as fast as simdjson? No, but simdjson is read-only. Is it as memory efficient as rapidjson's SAX writer? No, but it's a hell of a lot more ergonomic. Is it "modern" C++? Yes it is; just open the source and you can see for yourself. Nitpicking which standard is allowed to be called modern is silly. I'd argue modern C++ is a style or way of thinking. Would I use it for all projects? Maybe not, but this is a good, high-quality library.

Dear @nlohmann and contributors. Thank you for this library, I’ve used it in the past and appreciate your work and efforts.


An awesome library, wonderfully convenient to use. I particularly like being able to use the same code for JSON and CBOR.

been using nlohmann::json for several years. changing from something else (no reason to name it), has proven to be a good decision.

away from SFO, "Modern C++" means C++11 and above.

mr. lohmann, much thanks for an easy to use, robust, complete, and performant library provided for the world to use.

every piece of software has tradeoffs, you have developed a library that is not perfect, rather it is the one people use.

nicely done.

