
I think what this needs is a proper scan and vectorization.

Author has a nice repo with other doodles:

https://github.com/girliemac/a-picture-is-worth-a-1000-words

I really wish there were some way to find and navigate good OERs like these.




Yup, will do that. I found several anime pictures that did not work remotely as well as the examples.

I'm building this actually (been at it for a couple years :-/). But you know what, deviantArt got a lot better at browsing pictures a few months ago, with their 'more like this' feature. You should check it out.

In the same vein, I've been using Zwibbler ( http://zwibbler.com/demo/ ) for all the illustrations on my blog ( http://vjeux.com ) and it's been really nice to have images that look like they have been hand drawn.

Although I agree that this doesn't seem like the most useful idea, I do think it is really cool and it sounds like some interesting image processing.

To the OP, how did you come up with the idea? Is the algorithm from image to URL complicated?


I guess a better title would be "make clickable links from text in pictures".

That's fantastic, thank you. I've been using TinyPNG.com - will give this a go :-)

I used this: https://learnwithnaseem.com/best-playground-ai-prompts-for-a...

I just took the ones I liked, deleted the words that were specific to that image, and kept the ones that described its style. For example, on the first one I would delete "an cute kitsune in florest" but keep "colorfully fantast concept art". Then I added a comma-separated list of the features I wanted in my picture. It took a lot more trial and error than I expected, and adding sentences seemed to work worse than individual words. I'm sure I've barely scratched the surface of interfacing with the tool correctly, but the space is moving so fast that it's not the kind of thing I want to spend time learning right now, just to have that knowledge become obsolete in 6 months.
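
For what it's worth, that workflow is mechanical enough to script. A minimal sketch, assuming a made-up style fragment and feature list (the build_prompt helper and all the strings are placeholders, not anything the tool itself provides):

    # Keep the style words from a prompt you like, drop the subject-specific
    # words, then append your own subject features as a comma-separated list.
    def build_prompt(style_fragment, features):
        return ", ".join([style_fragment] + list(features))

    style = "colorful fantasy concept art"                 # kept from a prompt I liked
    features = ["red fox", "snowy forest", "soft morning light"]
    print(build_prompt(style, features))
    # colorful fantasy concept art, red fox, snowy forest, soft morning light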


Looks really good!!!

Consider (down the dev road) export options that optimize the image, like https://tinypng.com/ (I have no affiliation w/ them, other than as a user).
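
As a rough illustration of what such an export step could do, here is a minimal sketch using Pillow's palette quantisation. The file names and the 256-colour limit are assumptions, and real optimizers like TinyPNG do considerably more (transparency handling, better quantisers, metadata stripping):

    from PIL import Image

    def optimize_png(src, dst, colors=256):
        # Drop alpha for simplicity; real tools preserve transparency properly.
        img = Image.open(src).convert("RGB")
        # Median-cut palette reduction is where most of the savings come from
        # for flat illustrations; optimize=True lets Pillow pick better filters.
        img.quantize(colors=colors).save(dst, optimize=True)

    optimize_png("doodle.png", "doodle.min.png")

Fewer distinct colours means a better-compressed PNG; dedicated tools refine the quantisation, but the basic idea is the same.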


After a bit of back and forth, here's a working one with a persistent image link and the proper proportions http://jsbin.com/amunug/17 :)

I ran that image through the library with the default settings, and the result is, in my opinion, much better than any of the approaches shown there:

http://imgur.com/a/moP57


So make it more fun (a baseline for the simplest case is sketched below):

* Support crosscut shredders (shredded in both directions).

* Make it work even if the shreds aren't all the same size.

* Make it work when there are some pieces missing.

* Show where those missing pieces would probably go.

* Make it work when two images' shreds are mixed together.

* (with missing pieces)

* (generalized to N images)

* Predict what the missing pieces might look like (this one is much tougher).

* Think of ways you could cheat (like searching Tineye for the fragments to find the original image, if it was originally online somewhere).

You can keep going as long as you want. If it's too easy, just make it harder.

Edited because HN doesn't support Markdown or linebreaks. :(
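
For anyone starting on the base problem before the harder variants above, here is a minimal greedy baseline. It assumes equal-width vertical strips, none missing, and a made-up file name and strip width:

    import numpy as np
    from PIL import Image

    def unshred(path, strip_width=32, out_path="unshredded.png"):
        img = np.asarray(Image.open(path).convert("RGB")).astype(float)
        n = img.shape[1] // strip_width
        strips = [img[:, i * strip_width:(i + 1) * strip_width] for i in range(n)]

        # Cost of placing strip b immediately to the right of strip a: mean
        # squared difference between a's rightmost column and b's leftmost column.
        def seam_cost(a, b):
            return np.mean((strips[a][:, -1] - strips[b][:, 0]) ** 2)

        # Greedy chain: start from strip 0 and repeatedly attach whichever
        # remaining strip matches best on either end of the chain.
        order, remaining = [0], set(range(1, n))
        while remaining:
            right = min(remaining, key=lambda s: seam_cost(order[-1], s))
            left = min(remaining, key=lambda s: seam_cost(s, order[0]))
            if seam_cost(order[-1], right) <= seam_cost(left, order[0]):
                order.append(right); remaining.remove(right)
            else:
                order.insert(0, left); remaining.remove(left)

        result = np.hstack([strips[i] for i in order]).astype(np.uint8)
        Image.fromarray(result).save(out_path)

    unshred("shredded.png")

A greedy chain like this gets confused by low-contrast seams; the crosscut and missing-piece variants would need a global assignment or probabilistic matching instead.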


Related (coincidentally) — Google also posted research on a much more efficient approach to image generation:

https://news.ycombinator.com/item?id=39210458


I did that and also got inexplicable holes:

https://i.paste.pics/8bed0cd17629f0e9c852a24162bf381e.png

Otherwise my clumsy, misshapen caricature turned out surprisingly nice. (I mean, relative to how nonsensical its anatomy is.) The shapes are followed very precisely, so yes, blobby input begets blobby output.


Have you tried the open-source SRS https://www.memcode.com, folks? It supports images and formatting, and uses the SM-2 spaced-repetition algorithm too.

Show HN: I made 7k images with DALL-E 2 to create a reference/inspiration table

https://generrated.com https://news.ycombinator.com/item?id=32824448


So someone could just write a script to generate the full image, right? Since the instructions are the same for each pixel, that would make it easier to check your work.

edit: https://imgur.com/a/UO37L1b


The problem with this is that it's error-prone (there are no error-correcting bits). Unfortunately, error correction combined with scanning speed is really what's key for codes.

I have worked in the space, making some strides in speed & error correction.

Some of my public work is here: https://austingwalters.com/chromatags/

Your best bet is actually an overlay of two codes: a regular image (for humans) plus a code embedded in a color space (see the linked post for how to do that).
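
This is not the ChromaTags approach itself, just a toy sketch of the general "code hidden in a colour channel" idea, with no error correction; the cell size, offset, and file names are all arbitrary assumptions:

    import numpy as np
    from PIL import Image

    CELL, DELTA = 16, 10      # block size in pixels, blue-channel offset
    HALF = CELL // 2

    def embed(img_path, bits, out_path="tagged.png"):
        img = np.asarray(Image.open(img_path).convert("RGB")).astype(int)
        cols = img.shape[1] // CELL
        for i, bit in enumerate(bits):
            r, c = divmod(i, cols)
            y, x = r * CELL, c * CELL
            sign = 1 if bit else -1
            img[y:y+CELL, x:x+HALF, 2] += sign * DELTA        # left half of blue up/down
            img[y:y+CELL, x+HALF:x+CELL, 2] -= sign * DELTA   # right half the other way
        Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save(out_path)

    def extract(img_path, n_bits):
        img = np.asarray(Image.open(img_path).convert("RGB")).astype(int)
        cols = img.shape[1] // CELL
        bits = []
        for i in range(n_bits):
            r, c = divmod(i, cols)
            y, x = r * CELL, c * CELL
            left = img[y:y+CELL, x:x+HALF, 2].mean()
            right = img[y:y+CELL, x+HALF:x+CELL, 2].mean()
            bits.append(1 if left > right else 0)
        return bits

    embed("photo.png", [1, 0, 1, 1, 0, 0, 1, 0])
    print(extract("tagged.png", 8))

Decoding only works while the natural left/right blue gradient inside a cell stays under 2*DELTA, which is exactly why real schemes add calibration markers and error-correcting codes.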


You could have asked those tools to create images like the ones found in AI catalogs such as https://lexica.art/ and https://www.krea.ai/, and then compared them with what you can get for $10. That would be a comparison more favorable to AI.

It's fun to type in sentences in the URL part, and watch the picture morph around. Even a single letter repeated over and over gives cool patterns.

I wonder if there are any sentences that are also meaningful pictures?
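
I don't know what the site actually does, but the "sentence in, stable picture out" behaviour is easy to imitate: hash the text and seed a procedural pattern with the digest, so the same string always gives the same image and tiny edits change it completely. A toy sketch, with every detail invented rather than taken from the site:

    import hashlib
    import numpy as np
    from PIL import Image

    def picture_for(text, size=256):
        seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
        rng = np.random.default_rng(seed)
        y, x = np.mgrid[0:size, 0:size] / size
        # A few random sine waves per channel give smooth, string-specific blobs.
        channels = []
        for _ in range(3):
            a, b, c = rng.uniform(1, 8, 3)
            channels.append(127 + 128 * np.sin(a * x + b * y + c * x * y))
        img = np.clip(np.dstack(channels), 0, 255).astype(np.uint8)
        return Image.fromarray(img)

    picture_for("a single letter repeated over and over").save("sentence.png")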
