Hacker News

Depending how it works, probably not. If it's just a rounded value computed from the distance, you'd just have to make a few more spoofed queries. Each one would give you a thin donut of possible locations instead of a circle, if that's easy to picture. So ~3 attempts would usually give you quite a small area, and then you could narrow it arbitrarily from there.
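Here's a flat-plane sketch of that attack (the target position, probe locations, and 100 m rounding step are all made-up assumptions): each spoofed query constrains the target to a thin annulus, and intersecting three of them leaves only a sliver of candidate cells.

```python
import math

def rounded_distance(qx, qy, tx, ty, step=100.0):
    # Oracle: distance from a spoofed query point to the target,
    # rounded to the nearest `step` metres (the proposed mitigation)
    return round(math.hypot(tx - qx, ty - qy) / step) * step

def consistent(cx, cy, queries, step=100.0):
    # A candidate location survives if it reproduces every rounded reading
    return all(rounded_distance(qx, qy, cx, cy, step) == r
               for qx, qy, r in queries)

target = (430.0, -870.0)                  # hypothetical user position, metres
probes = [(0, 0), (2000, 0), (0, 2000)]   # three spoofed query locations
queries = [(qx, qy, rounded_distance(qx, qy, *target)) for qx, qy in probes]

# Test every cell of a 10 m grid covering a 4 km square
survivors = [(x, y)
             for x in range(-2000, 2001, 10)
             for y in range(-2000, 2001, 10)
             if consistent(x, y, queries)]
```

Of the ~160,000 grid cells, only those where the three annuli overlap survive, and each additional query thins them further.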



1. Yes.

2. Difficult how? You don't have to perform it on items that can't be automatically searched for. Then you just subtract the coordinates, square them, and check whether you're over a kilometer.


They've responded by saying that they're randomising locations - OP finds it's always 1km from his location.

If it's a fixed distance (radius) then only the angle can be randomised. This means that graphing even just a few points from a consistent location will lead to a circle centred precisely on the user's address. Even if it's only partial, an arc can be extrapolated to a circle without any loss.
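As a sketch of how cheaply the centre falls out (the coordinates and the rough degrees-per-km figure are assumptions; a least-squares circle fit would also work from a partial arc): with a fixed radius and uniformly random angle, simply averaging the reported points converges on the user's address.

```python
import math, random

random.seed(1)
home = (51.5000, -0.1200)   # hypothetical true coordinates
R = 0.009                   # ~1 km expressed in degrees of latitude (rough)

def spoofed_location():
    # Fixed distance, random bearing: every fake point sits on a circle
    theta = random.uniform(0, 2 * math.pi)
    return (home[0] + R * math.cos(theta), home[1] + R * math.sin(theta))

points = [spoofed_location() for _ in range(500)]
# The mean of uniformly distributed points on a circle converges to its centre
est = (sum(p[0] for p in points) / len(points),
       sum(p[1] for p in points) / len(points))
```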


It depends on when they do the rounding.

If they round the final distance number, all you've done is reduced the accuracy to concentric circles like a dartboard, and you simply need more samples to regain the precision.

If they round the input location to the centroid of some larger region-- a square several kilometers on a side or a map shape for a region-- there's no precise information to be leaked.
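A minimal sketch of that safe variant, assuming a ~5 km square grid (the cell size and coordinates are invented): snap the user's coordinates to a cell centroid before any distance is computed, so every user in the cell is indistinguishable.

```python
def snap_to_centroid(lat, lng, cell_deg=0.05):
    # Floor each coordinate to its grid cell, then report the cell's
    # centre; all finer detail is discarded before any distance math.
    clat = (int(lat // cell_deg) + 0.5) * cell_deg
    clng = (int(lng // cell_deg) + 0.5) * cell_deg
    return clat, clng

# Two different addresses in the same cell produce identical output,
# so repeated queries leak nothing new
a = snap_to_centroid(51.5031, -0.1132)
b = snap_to_centroid(51.5102, -0.1201)
```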


You can make one easily enough. There is a location trigger and you can scale the distance. It may not be as precise as you'd like and the smallest zone was still pretty large.

If the rationale is privacy this is probably not the algorithm you want to use, especially if it's possible to force the generation of multiple random points (e.g. by refreshing the map). If you're using a circle with a fixed radius just 3 points would be enough to exactly locate the center. A variable radius could take more, but since the distribution is uniform each additional point would significantly improve the precision.
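The fixed-radius failure mode can be sketched directly (the three sample points are invented, but the algebra is the standard circumcenter formula): any three random points on the circle pin down its centre exactly.

```python
def circumcenter(p1, p2, p3):
    # Centre of the unique circle through three non-collinear points
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy

# Three "randomised" points on a unit circle around a hidden home at (3, 4)
centre = circumcenter((4, 4), (3, 5), (2, 4))
```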

Is it possible to increase the range of the circle determining my location to an entire city (or even more)? Great idea.

EDIT: by range of the circle I mean radius/diameter.


> Instead of saying "896m", return "<1km" or "1km-3km" etc. Can you still abuse the data to some extent?

Wouldn't enough samples eventually have their "range donuts" overlapping heavily at the user's location?

This could probably be automated to track someone's position in real time.


That would leak if there are multiple data points. In the extreme case, two query points just either side of a 3-mile circle boundary would locate the target precisely (where the two circles touch).

Given N points within M metres or so (roughly what you get from multiple measurements of "home" with measurement error), I think you could have a reasonable stab at deriving the location from an estimate of the geolocation error plus which circles the points fell into. All falling in the same circle makes "near the centre" more likely than "near the edge".

edit: daily check in, so all the above needs is a tendency to check in from roughly the same location each day.


Yes w/ a basic user account you can do that via the position page - but currently only up to 30km or ~20mi or something. Maybe we can increase the radius span a bit ;)

You probably don't even need to go that far. The randomness will be reduced when using multiple locations for trilateration.

Rounding should work if you round the coordinates instead of the distance. Then the best an attacker can calculate, however precisely, is the rounded, approximate location.
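To see why rounding the coordinates caps the leak, here's a hedged flat-plane sketch with made-up numbers: an attacker doing exhaustive brute-force trilateration recovers the snapped point exactly, and nothing finer.

```python
import math

STEP = 0.01                       # ~1 km of latitude per grid cell
user = (51.5074, -0.1278)         # hypothetical true location
snapped = (round(user[0] / STEP) * STEP, round(user[1] / STEP) * STEP)

def oracle(qlat, qlng):
    # The service rounds the user's coordinates, then reports exact distance
    return math.hypot(qlat - snapped[0], qlng - snapped[1])

probes = [(51.0, 0.0), (52.0, 0.0), (51.5, -1.0)]
readings = [oracle(*p) for p in probes]

def misfit(lat, lng):
    # How badly a candidate location disagrees with all three readings
    return sum((math.hypot(lat - plat, lng - plng) - r) ** 2
               for (plat, plng), r in zip(probes, readings))

# Brute-force trilateration over a fine grid around the area
best = min(((lat / 1000, lng / 1000)
            for lat in range(51400, 51601, 2)
            for lng in range(-200, 1, 2)),
           key=lambda c: misfit(*c))
```

`best` lands on the snapped cell, a few thousandths of a degree from the real address: the attacker recovers the approximation precisely, but the approximation is all there is.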

Basically, yes. I wrote a long article about this [1] which goes into depth on how algorithms like this work — hope it helps!

[1] https://blog.mapbox.com/a-dive-into-spatial-search-algorithm...


Most likely, but I couldn't find an easy way to go to a specific lat/lon coordinate, and they make interacting with their system in an unintended fashion fairly annoying. If you click 'random location' you'll see a POST with a JSON body containing lat/lng coords and such; you might be able to manipulate that in order to figure out the correct grid # that tomnod assigns to that area.

It’s only a partial solution; you still need to limit the queries. Otherwise, just sample a large number of coordinates and average out the noise.

No, because latitude and longitude are infinitely precise, limited only by the accuracy of the measuring instrument. Also, with the binary-search idea you proposed, the dividing lines are more tightly packed longitudinally near the poles.
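The poles point can be made concrete (111.32 km per degree is the usual equatorial approximation): a degree of longitude shrinks with the cosine of latitude, so equally spaced longitudinal dividing lines pack ever tighter on the ground.

```python
import math

def metres_per_degree_longitude(lat_deg):
    # ~111.32 km per degree at the equator, scaled by cos(latitude)
    # because meridians converge toward the poles
    return 111_320.0 * math.cos(math.radians(lat_deg))
```

At 60° latitude a degree of longitude covers half the ground it does at the equator; by 89° it's down to under 2 km.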

No: the purpose of the function, and what it returns, is the group of zipcodes within a radius of a given zipcode.

Memoization = tabling, which is exactly what I'm doing, but in a static sense.

Yes, it's narrow, but the owners of the app don't want more than the 6 options given.

My point is that computing which points fall within a circle around a given point on Earth more than once is redundant. Points on Earth don't move, so if two points are within 1 mile of each other, they always will be.
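A sketch of that compute-once idea (the centroid table and helper names are hypothetical; real centroids would come from census data): memoise the pairwise answer, since fixed points can never drift in or out of range.

```python
import math
from functools import lru_cache

# Hypothetical zipcode -> (lat, lng) centroid table
ZIP_CENTROIDS = {
    "10001": (40.7506, -73.9972),
    "10011": (40.7402, -74.0000),
    "94105": (37.7898, -122.3942),
}

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance in miles
    r = 3958.8
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

@lru_cache(maxsize=None)
def within_one_mile(zip_a, zip_b):
    # Centroids never move, so each pair is computed at most once;
    # a fully static precomputed table would serve equally well.
    (la, na), (lb, nb) = ZIP_CENTROIDS[zip_a], ZIP_CENTROIDS[zip_b]
    return haversine_miles(la, na, lb, nb) <= 1.0
```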


It depends a lot on the implementation strategy. Some databases have GIS features, allowing you to query all points within a radius directly. I know Oracle can do this, but it's a paid add-on (Spatial and Graph) and I don't know how well it scales.

On databases that have good scanning behavior you can use a geohash approach instead. Insert all points with a geohash as key, determine a geohash bounding box that roughly fits your circle, query the rows matching that geohash in their key, then run a "within distance" filter on the results client-side. I used that approach in HBase and it could query really fast. I suspect it would perform well on pretty much any mainstream SQL DB, but admittedly I haven't tried it.
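A toy end-to-end version of that pattern (the encoder follows the standard geohash scheme; the sample points and `prefix_scan` helper are invented, with `bisect` standing in for a key range scan in HBase or a B-tree index):

```python
import bisect

_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lng, precision=6):
    # Standard geohash: alternately bisect longitude and latitude,
    # packing the bits five at a time into base-32 characters
    lat_lo, lat_hi, lng_lo, lng_hi = -90.0, 90.0, -180.0, 180.0
    chars, bits, nbits, even = [], 0, 0, True
    while len(chars) < precision:
        if even:
            mid = (lng_lo + lng_hi) / 2
            bits = bits << 1 | (lng >= mid)
            if lng >= mid: lng_lo = mid
            else: lng_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            bits = bits << 1 | (lat >= mid)
            if lat >= mid: lat_lo = mid
            else: lat_hi = mid
        even, nbits = not even, nbits + 1
        if nbits == 5:
            chars.append(_BASE32[bits])
            bits = nbits = 0
    return "".join(chars)

# Rows keyed by geohash, as they would be in a scan-friendly store
rows = sorted((geohash(lat, lng), name) for name, (lat, lng) in {
    "museum": (51.5194, -0.1270),
    "station": (51.5308, -0.1238),
    "eiffel": (48.8584, 2.2945),
}.items())

def prefix_scan(prefix):
    # Key range scan standing in for `WHERE key LIKE 'prefix%'`;
    # the exact "within distance" filter then runs on these candidates
    keys = [k for k, _ in rows]
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_right(keys, prefix + "\x7f")
    return rows[lo:hi]
```

Scanning the prefix covering the query circle returns the two nearby London points and skips Paris entirely, before any distance is computed.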


Yes, I had the same solution ~15 years ago when location-based search got popular. Searching for anything within some distance of a defined location using a perfect Earth projection and a circle radius "did not scale". Ignoring the Earth's projection and pretending a flat Earth with a rectangle search was much, much faster.

In the end we used this simplified calculation with added "regions". The world was split into 15 km × 15 km squares, so any square more than x squares apart could never be in the result set. This could maybe be combined with modern PostgreSQL partitioning and partition elimination in a clever way.

And without partitioning, clever z-ordering of the entries physically in the database (clustering) could reduce a lot of random I/O.
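That square-grid pruning can be sketched like this (cell size, query point, and sample data are made up, and the flat-earth shortcut only holds away from the poles): cells too far from the query can never contain a hit, so they're dropped before any exact distance math.

```python
import math

CELL_DEG = 15.0 / 111.32           # ~15 km expressed in degrees of latitude

def cell_of(lat, lng):
    # Flat-earth bucketing into roughly 15 km x 15 km squares
    return int(lat // CELL_DEG), int(lng // CELL_DEG)

def prefilter(points, lat, lng, radius_km):
    # Conservative square prefilter: any point whose cell is further
    # away than the radius could possibly reach is dropped before the
    # exact (and expensive) great-circle check runs on the survivors.
    lat_span = int(radius_km / 15.0) + 1
    # Longitude cells cover fewer km away from the equator, so widen the span
    lng_span = int(radius_km / (15.0 * math.cos(math.radians(lat)))) + 1
    c0 = cell_of(lat, lng)
    return [p for p in points
            if abs(cell_of(*p)[0] - c0[0]) <= lat_span
            and abs(cell_of(*p)[1] - c0[1]) <= lng_span]

query = (52.52, 13.405)            # hypothetical query location
candidates = prefilter([(52.55, 13.40), (48.14, 11.58)], *query, radius_km=10)
```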


If you look at it in just the right scale: definitely.

The problem is to find the exact spot to look at. Even today, geo-exploration is mostly a joke: drill where experience tells us to. New hotness: Have AI do inference from formalized experience.

There could be a thousand former Fort Knoxes or NYCs, and we, with our methods, have only a snowball's chance in hell of finding any.

