
> If you farm the set of all possible x and try them to see what works you do not have an inverse function in your hands.

I'm not sure I follow, what do I have then? Could you give me an example?




> Are there any negatives to this approach?

Well, probs[x] can go negative when you decrease it, if I'm not mistaken.
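A minimal sketch of the issue and one common workaround (hypothetical names; the original code isn't shown): clamping at zero and renormalizing keeps the distribution valid after an additive decrease.

```python
# Hypothetical sketch: additively decreasing a probability can push it
# below zero; clamping and renormalizing keeps the distribution valid.
def decrease_prob(probs, x, delta):
    probs[x] = max(0.0, probs[x] - delta)  # clamp to avoid negative mass
    total = sum(probs.values())
    return {k: v / total for k, v in probs.items()}  # renormalize to sum to 1

probs = {"a": 0.1, "b": 0.9}
updated = decrease_prob(probs, "a", 0.5)  # a naive update would give -0.4
```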


> Does that imply that it could be possible to construct a system which gives uncomputable results?

They produce a family of systems for which it is not possible to construct an algorithm that can yield a certain property (being gapped) of each system.

> Does it assume an infinite plane or something? (Assume might be the wrong word)

Yes. The system is infinite. (Or, take a limit as the system size grows to infinity.)


>and as such there's no guarantee that you can represent its behaviour by the current algorithms.

Doesn't the universal function approximation theorem provide just such a guarantee? It doesn't guarantee any algorithm will converge on that representation, but the capability to represent it is there.


>Am I missing something fundamental here?

Yeah, these aren't magic formulas, they are just examples.

>In the first example, the method compute_error_for_line_given_points is called with values 1, 2, [[3,6],[6,9],[12,18]]. Where did those values come from?

It's an example. The first two arguments define a line y = 2x + 1, the pairs are (x,y) points being used to compute the error.
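A plausible reconstruction of what that method computes (not the article's exact code): the mean squared error of the line y = m*x + b over a list of (x, y) points.

```python
# Mean squared error of the line y = m*x + b over a list of (x, y) points.
def compute_error_for_line_given_points(b, m, points):
    total_error = 0.0
    for x, y in points:
        total_error += (y - (m * x + b)) ** 2  # squared vertical distance
    return total_error / len(points)

# b=1, m=2 define the line y = 2x + 1; the pairs are the sample points.
error = compute_error_for_line_given_points(1, 2, [[3, 6], [6, 9], [12, 18]])
```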

"To play with this, let’s assume that the error function is Error=x^5?2x^3?2"

This is just an example of a function used as exposition to talk about derivatives.

It isn't even an error function though. An error function has to be a function of at least two variables.
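Still, the single-variable example does the expository job: you can differentiate it and step downhill. A sketch (my own, not from the article):

```python
# The example function and its derivative, used only to illustrate
# gradient descent on a single variable.
def f(x):
    return x ** 5 - 2 * x ** 3 - 2

def df(x):
    return 5 * x ** 4 - 6 * x ** 2  # d/dx (x^5 - 2x^3 - 2)

x = 1.0
for _ in range(100):
    x -= 0.01 * df(x)  # step downhill along the derivative
# x converges to the local minimum at sqrt(6/5)
```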


> There are an infinite number of functions that can create a particular bit sequence, and each bit sequence can be mapped to a single string, so there must be more functions than strings.

I don't think this logic works. For example, there are an infinite number of "rational numbers" of the form np/p where n, p ∈ Z, and each of these can be mapped to a single integer (namely, n). But there are not more of these "rational numbers" than integers.


> You can extrapolate accordingly.

Hm no I can't. What's the complexity of your algorithm?


> at which point you can really only evaluate all possible combinations

You only need one counterexample, and in this case you can easily find it. E.g., you can note that the duckie in GP's solution can be swapped for the pinwheel+whistle without changing the solution's total cost, ergo the solution isn't unique.

I think a lot of people are looking at this like a coding interview, where it's assumed that your algorithm must work for all possible inputs, rather than just the inputs stated.


> The problem comes in when you repeat the exercise on a different example and find that also 0/0 = 3, or whatever. And now you start to see something is really amiss.

Yes, that's the obvious part.


> This is completely uncontroversial

I think it is still intuitively surprising that some infinite sets somehow have more elements than other infinite sets. The powerset operator is very special because it can create this difference.
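For finite sets the gap is easy to check by brute force: |P(S)| = 2^|S| > |S|. Cantor's diagonal argument is what shows the strict inequality survives for infinite sets. A quick finite illustration:

```python
from itertools import chain, combinations

# For finite sets, |P(S)| = 2^|S| > |S|. Cantor's diagonal argument
# extends the strict inequality to infinite sets.
def powerset(s):
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(6):
    assert len(powerset(range(n))) == 2 ** n > n
```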


>Nope, you don't need the axiom of choice to define the sequence of all algorithms.

No. I'm talking about *selecting* an algorithm out of the set of all algorithms.

I'm not talking about *defining* the set of all algorithms.


> You could take an item out of both bags un­til you ex­hausted the sup­ply of one bag. If the other bag was si­mul­ta­ne­ously empty, then you’d know they had the same car­di­nal­ity.

Well, since you can't exhaust an infinite supply of integers, I don't see how this method is acceptable when trying to find a one-to-one correspondence.

I feel like I may be missing something so would appreciate if someone could chime in.
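As I understand it, for infinite sets you never literally empty the bags; instead you exhibit a rule pairing every element of one set with exactly one element of the other. A toy illustration (my example, not the article's): n ↔ 2n pairs the integers with the even integers.

```python
# A bijection is a pairing rule, checked by showing it has an inverse,
# not by physically exhausting the two sets.
def to_even(n):
    return 2 * n

def from_even(m):
    return m // 2

# Check the pairing is invertible on a finite sample of the infinite sets.
for n in range(-1000, 1000):
    assert from_even(to_even(n)) == n
```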


>But what happens when it tries to emulate itself? Well, if it produced any output, it would immediately produce a different output. The way out of paradox is that it returns nothing at all.

Why immediately? It only has to differ by one bit somewhere along the infinite chain, right? Why waste that perfectly good output when it can just as well output some of the emulation before interrupting it?


> "I made an algorithm that replicates an observed distribution, so my description of the algorithm is a potential explanation for that distribution."

I mean, technically it is a potential explanation, just not necessarily a correct one.


> However, they are increasingly being used to generate (sic) assist human mathematicians in a variety of creative ways, beyond just brute-force case checking or computation.

Can you fix this? Even with the (sic), I have no idea what you are trying to say here.


>> if the number is close to epsilon, the error bars on 1/x would be huge.

> Sure, but what's the use case for mathematics where you don't know what side of an asymptote you're on?

Knowing which side of the asymptote you're on does not solve this problem or even ameliorate it.


> How would you find (n-1)! given n!?

If you want to think a little more:

You are given a value v, which you are told is n! for some n, which you are not told. Find a general algorithm to produce (n-1)!.

I have an algorithm which, I fear, may not have the most efficient running time.
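One straightforward (and likely not optimal) algorithm: recover n by dividing v by 2, 3, 4, ... until the quotient reaches 1, then (n-1)! is just v // n. A sketch:

```python
# Given v = n! (n unknown), divide out 2, 3, 4, ... until the quotient
# hits 1; the last divisor is n, so (n-1)! = v // n. O(n) divisions.
def prev_factorial(v):
    k, q = 1, v
    while q > 1:
        k += 1
        q //= k  # after this loop, k == n (exact divisions: q = v / k!)
    return v // k  # v = 1 covers both 0! and 1!; (n-1)! is 1 either way

prev_factorial(120)  # 5! -> 4! = 24
```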


> I worry about why it doesn't have to say abs(n) but then I remember two negatives multiplied together are positive.

The function doesn't terminate if n is negative, so you are right to worry!
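A guess at the kind of function under discussion (the original code isn't shown): a naive recursive factorial, whose base case a negative argument never reaches.

```python
# Naive recursive factorial: for negative n the recursion walks
# -1 -> -2 -> -3 -> ... and never hits the n == 0 base case, so in
# practice Python raises RecursionError instead of terminating.
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

factorial(5)  # 120
```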


> So you don't actually know how to do it, but assume that you can. This is a little strange.

Not all that strange. I trust the members of the Algorithms Research community when they tell me it's possible to test a number for primality in polynomial time. I have no idea how to do it.


> You don’t have an algorithm either.

Are you saying algorithms don't exist, and can't be followed, reasoned about, and proven correct? Or not?

