I mean, that's a little strong. My undergrad degree is in math, so I would be thrilled if I was sitting in an interview and was being asked questions like that instead of stuff about how computers actually work.
>Writing a package manager (been done before, a lot of times) is far different from writing machine learning algorithms (new field, blazing the trail for the industry). One requires an engineer, one requires a scientist.
I mean, a package manager is probably far more rooted in computer science than machine learning (which is basically just applied statistics: software engineering edition).
> I'd say that CS was a superset of math, not the other way around. "Computer" is just the name we give to the whole class of physical tools we use to study information.
Then you'd be ignoring that a whole mess of math is not, in fact, computable (the axiom of choice, the law of excluded middle, etc.). I think the problem is that way too many "computer scientists" are pretty much software engineers.
Dude, I'm a grad student in categorical logic and I've worked with PLT. That's why I used them as an example. To formalize most math, the simply typed lambda calculus won't do, you need dependent types. But the logic used in math is almost always extensional, and type checking an extensional type theory isn't computable. Huge chunks of math, from analysis to chaos theory, are simply not computable.
Saying timed exams in math test for speed is a simplification. Perhaps that's true at the underclassman level, but I've found that in a lot of upper-level courses the tests were more about making sure you have certain concepts internalized and can make certain inferences without a secondary source. Oral examinations would work just as well, of course, but those are usually considered more stressful.
I don't think you understand that a programming language is a formal language that gets interpreted by a machine. It's pretty much mathematical logic the whole way down, and the fact that you could think they're a cultural/social phenomenon is a testament to the amazing work PL researchers have done over the years.
If it's something a sophomore with no background beyond calculus can handle, it is simple stuff. It's ridiculous that treating software engineers as though they have a bachelor's degree in STEM makes you a "LambdaBro."
I didn't say it's included, I'm saying anyone capable of a bachelor's degree in STEM is capable of picking it up. And if you think you were a "theory" person, but don't know the basics of abstract algebra, then you have an incredibly naive idea of what theoretical computer science entails. Hint: there's a lot of abstract algebra.
If they're just terms, then it doesn't matter that they're terms from abstract math. But this way there's the added benefit that it's super easy to find resources, since you have all of the mathematical literature to draw on.
I'm at a Canadian school and work for a pretty well known prof (well known in my field, at least). The more I've encountered people who've always been in "elite" institutions, the less I've been impressed.
Yeah, I know mathematicians are often incredibly productive during their post-docs. Mind you, if you aren't, it's the kiss of death for your academic career.
I think that's a bit tone deaf. He doesn't think he owns his street so it doesn't occur to him that other people feel that way, and that's a pretty fair stance. That mindset is why a lot of people hate the suburbs.
I'm confused, shouldn't people be supplying type signatures to make the compiler's job easier? I use plenty of lambdas in Haskell, but my top-level functions are always typed.
Lots of languages have these features and it works fine. Finding the type of an expression is one of the canonical applications of a constraint solver.
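To make the constraint-solving point concrete, here's a toy unification sketch in Haskell - a made-up mini type language for illustration, not any real compiler's algorithm (the occurs check is omitted to keep it short):

```haskell
import qualified Data.Map as M

-- A toy type language: type variables, Int, and function types.
data Ty = TVar String | TInt | TFun Ty Ty
  deriving (Eq, Show)

type Subst = M.Map String Ty

-- Apply a substitution to a type.
apply :: Subst -> Ty -> Ty
apply s (TVar v)   = M.findWithDefault (TVar v) v s
apply s (TFun a b) = TFun (apply s a) (apply s b)
apply _ TInt       = TInt

-- Solve the constraint t1 ~ t2 by unification, the core move in
-- Hindley-Milner style inference.
unify :: Ty -> Ty -> Maybe Subst
unify TInt TInt = Just M.empty
unify (TVar v) t = Just (M.singleton v t)   -- occurs check omitted
unify t (TVar v) = Just (M.singleton v t)
unify (TFun a b) (TFun c d) = do
  s1 <- unify a c
  s2 <- unify (apply s1 b) (apply s1 d)
  Just (M.union s2 (M.map (apply s2) s1))
unify _ _ = Nothing
```

Solving `a -> Int ~ Int -> b` sends both `a` and `b` to `Int` - exactly the flavor of constraint a compiler discharges when it infers the type of a lambda.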
>Just five days in, the MacBook Pro models had already hit almost 80% of combined 2015 and 2016 MacBook sales – and look set to exceed 18 months of sales in the first week.
So Apple is seeing incredible sales numbers even by Apple's own standards, not just compared to some smaller manufacturer.
People are excited by it, and managed to deal with prime-ageddon without any major headaches (it's like basic regex search and there's even an update tool, come on).
Well, evaluation in these languages is more transparent. The syntax is basically the lambda calculus, so once you've actually learned the language it makes a huge difference. I'd try reading up on "referential transparency".
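For example (the names here are made up for illustration), referential transparency is what makes this kind of substitution safe:

```haskell
-- Referential transparency: any expression can be swapped for its value
-- (or for its definition) without changing the program's meaning.
double :: Int -> Int
double x = x + x

a, b, c :: Int
a = double (3 + 4)      -- the original expression
b = (3 + 4) + (3 + 4)   -- inline double's definition
c = 14                  -- evaluate; all three are interchangeable
```

That's the equational reasoning you lose the moment evaluation can have side effects.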
I've had some experiences where it really helped. My supervisor uses a teaching stack-based VM for his compilers class - it is slow as all hell but very simple, easy to get running, and it gives very good debugging info. Similarly, I can see the value in a simplified graphics API that lets you focus on the basic theory, or a simplified language with Hoare logic for writing loop invariants in an algorithms class.
It's pretty crazy. I'm Canadian and went to public schools the whole way through; now I'm doing my PhD. When I'm at summer schools or conferences with Americans, it's like half of those students have only ever interacted with other people in gifted programs at private schools.
It's a pretty bizarre misuse of the term. The obvious person to suspect in this situation is the person who has the most to gain; for example, it's not victim-blaming to suspect a homeowner in an arson case when they stand to get a large insurance payout, it's choosing the obvious suspect. Graduate students/post-docs whose research is going nowhere have done all sorts of stupid shit in the past (like sabotaging another student's work for no apparent reason).
Someone with a failing business is going to be the prime suspect when it burns down, and a researcher whose project appears to be going nowhere will be the prime suspect when their samples are sabotaged. It's fairly straightforward reasoning.
>In order to make these arguments less religious, I think it would be worthwhile to adopt Turing's mathematical philosophy, which called for adopting mathematical foundations on an ad-hoc basis (or even no foundation at all). In other words, choose whatever foundation (if any) for the task at hand. This would make it easier to argue that, say, type theory is a more convenient core for proof checkers.
I think you're misrepresenting the category theorists; they're usually the ones arguing for choosing whatever foundation is convenient for a given domain. A lot of the time, this is a type-theoretic foundation, because a lot of modern mathematics is about pretending you have function types when your category doesn't actually have them (as in differential geometry), or about caring only about a "core" set of operations while the rest of the framework you're working in simply gets in the way (as with Hilbert spaces vs. compact closed dagger categories in categorical quantum mechanics). And if you already knew dependent type theory, then synthetic homotopy theory is easier than classical homotopy theory: some classical proofs can take several lectures to present, and even then it's still pretty hard to see why they hold.
It's just weird to see people think the category theorists are the ones being impractical compared to set theorists like Friedman. I can't think of a categorical logician who doesn't have an active line of research outside of logic, usually in topology, computability, quantum mechanics, or algebraic geometry. They're not just logicians but active researchers in computer science, physics, and mathematics; logic is a powerful tool and should be applied in these fields.
Computation is an entirely abstract thing. Nowhere does the definition of Turing machines or the lambda calculus make any reference to actual physical things.
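To underline that: the untyped lambda calculus fits in a few lines of Haskell with no mention of any physical machine. This is a deliberately naive sketch (the substitution is only safe for closed terms like the example below, so treat it as an illustration, not a real evaluator):

```haskell
-- A term of the untyped lambda calculus: pure syntax, nothing physical.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution; capture-avoidance is skipped, which is fine for
-- the closed example terms used here.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One small step of normal-order beta reduction.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step _ = Nothing

-- Reduce until no redex remains.
eval :: Term -> Term
eval t = maybe t eval (step t)
```

Running `eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))` reduces the identity applied to the identity down to `Lam "y" (Var "y")`, all by symbol pushing.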
The lambda calculus preceded Turing machines; in fact, Turing worked on the lambda calculus before he published his work on Turing machines. Where on Earth did you pick up this rubbish?
>The belief that Church defined computation rigorously when he solved the Entscheidungsproblem using the lambda calculus, rather than just making an imprecise conjecture, is a common mistake and historical revisionism.
But ultrafinitism isn't actually interesting as a mathematical theory. As the previous poster said, its appeal lies in its "realness". Intuitionistic and linear logic are substantially more interesting.
That's exactly what he's addressing, utilitarians with messed up definitions of utility. Your head needs to be awfully far up your ass to come to the conclusion that slavery is good without questioning the principles that lead to that conclusion.
That topic has actually been researched: the slave states had less developed economies than the northern states, for the same reason all feudal/slave states have underdeveloped economies.
Of course, some people in this thread are probably ready to laud Hitler's economic acumen while ignoring that his economic gains came from public spending on building a war machine and from taking Jewish people's stuff.
Do you only value white people's well-being? Because if you actually knew anything about the history of slavery, you'd never have had that thought. People aren't exaggerating about generational PTSD - splitting up over half of new families, corporal punishment, sexual abuse, the list goes on.
>At this point, tutorials use examples like "like flatMap and Maybe in other languages!" which is even more confusing. Why do I need a monad then, if there are similar constructs in other languages that don't need the understanding of monad? Why the complexity? What do I get from monads?
Monads are a mathematical concept, like a ring. "Maybe" is a monad in whatever language you use it in, just like the integers are a ring in whatever language you use them in. Pointing out that complex numbers, rational numbers, and n×n matrices are all examples of rings, and wrapping everything up into a type class, doesn't add to the complexity of the language.
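Concretely, in Haskell (`phoneBook`/`areaBook` are made-up data for the example), Maybe's `>>=` is the "flatMap" those tutorials are gesturing at:

```haskell
import qualified Data.Map as M

-- "Monad" names an interface, the way "ring" names an interface that
-- integers and matrices both satisfy. Maybe satisfies Monad, and its
-- (>>=) is what other languages call flatMap.
phoneBook :: M.Map String String
phoneBook = M.fromList [("alice", "555-1234")]

areaBook :: M.Map String String
areaBook = M.fromList [("555-1234", "NYC")]

-- Chain two lookups that can each fail; (>>=) threads the Maybe
-- through, so any Nothing short-circuits the whole chain.
cityOf :: String -> Maybe String
cityOf name = M.lookup name phoneBook >>= \num -> M.lookup num areaBook
```

Here `cityOf "alice"` is `Just "NYC"` and `cityOf "bob"` is `Nothing`, with no manual null checks; the type class just names the pattern.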