Additionally, I want to experiment with making an AI that generates HTML and learns from what it's doing by "seeing" the results. I need something that converts HTML code to rendered pixel colours.
Goal: Convert input to output
Input: HTML5 page+relevant files (in my program's memory)
Output: an X*Y array of pixel colour values (in my program's memory) of what the webpage "looks" like
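For concreteness, here's a minimal sketch of what that output format could look like: plain Python that unpacks a raw byte buffer, such as a screenshot or rendering API might hand back, into an X*Y array of (r, g, b) tuples. The row-major RGBA layout is an assumption; real APIs vary (some use BGRA or pad rows).

```python
def bytes_to_pixels(raw, width, height):
    """Unpack a row-major RGBA byte buffer into a height x width
    array of (r, g, b) colour tuples.

    Assumes 4 bytes per pixel (RGBA); the alpha channel is dropped.
    """
    assert len(raw) == width * height * 4, "buffer size mismatch"
    pixels = []
    for y in range(height):
        row = []
        for x in range(width):
            i = (y * width + x) * 4
            row.append((raw[i], raw[i + 1], raw[i + 2]))  # skip alpha
        pixels.append(row)
    return pixels

# A 2x1 image: one red pixel, one blue pixel (both fully opaque).
demo = bytes([255, 0, 0, 255, 0, 0, 255, 255])
print(bytes_to_pixels(demo, 2, 1))  # [[(255, 0, 0), (0, 0, 255)]]
```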
One obvious approach is to use an off-the-shelf browser, say Google Chrome. My program would save the HTML to a file on the hard drive, launch Google Chrome to open that file, wait a few seconds in the hope that Chrome has finished rendering, then take a screenshot and crop out the part of the screen showing the webpage.
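As a side note, recent Chrome builds can do the open-and-screenshot part without showing a window at all, via the `--headless` and `--screenshot` flags (whether these are available depends on the installed Chrome version). A sketch of that workflow, with the actual launch left commented out since it requires Chrome on the machine; the `google-chrome` binary name is an assumption:

```python
import os
import tempfile
import subprocess

def chrome_screenshot_cmd(chrome_path, html_path, png_path, width, height):
    """Build the argv for a headless-Chrome screenshot.
    The --headless/--screenshot/--window-size flags exist in modern
    Chrome builds; older versions lack headless mode entirely."""
    return [
        chrome_path,
        "--headless",
        "--disable-gpu",
        "--window-size=%d,%d" % (width, height),
        "--screenshot=" + png_path,
        "file://" + os.path.abspath(html_path),
    ]

# Dump the in-memory HTML to a temp file, then (if Chrome is
# installed) render it straight to out.png:
with tempfile.NamedTemporaryFile(suffix=".html", delete=False) as f:
    f.write(b"<h1>hello</h1>")
    html_path = f.name

cmd = chrome_screenshot_cmd("google-chrome", html_path, "out.png", 800, 600)
# subprocess.run(cmd, check=True)  # uncomment when Chrome is available
```

This still goes through the disk twice (the HTML file in, the PNG out), so it only removes the window/cropping part of the pipeline, not the memory round-trips.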
However, that's inefficient for a few reasons:
- the input goes from my program's memory -> a file on the hard disk, and then from that file -> the browser that opens it. In theory it could be faster if the webpage were processed entirely in memory (though maybe this isn't a large bottleneck?)
- I have to "wait" some unknown amount of time for Chrome to finish rendering. How would my program know when Chrome is done? As far as I know there is no API for this, other than a user looking at the screen and noticing that the "loading" indicator has come to a stop.
- I have to capture the browser's rendering output. It goes from the browser process's memory -> the screen, and then I capture the entire screen -> copy it back into my program's memory. Again, maybe this won't be a bottleneck, but it still feels inefficient.
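On the "how do I know when it's done" point: one crude but common workaround, assuming the render ends up in an output file, is to poll until the file exists and its size stops changing. A sketch (the 0.2 s settle window is an arbitrary choice):

```python
import os
import time

def wait_for_stable_file(path, timeout=10.0, settle=0.2):
    """Poll until `path` exists and its size has stayed unchanged for
    `settle` seconds, or give up after `timeout` seconds.
    Returns True if the file settled, False on timeout."""
    deadline = time.time() + timeout
    last_size = -1
    last_change = time.time()
    while time.time() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                if time.time() - last_change >= settle:
                    return True
            else:
                last_size = size
                last_change = time.time()
        time.sleep(0.05)
    return False
```

This still leaves a race if the renderer writes the file in bursts slower than the settle window, which is exactly why an in-process API with a proper "done" callback would be cleaner.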
So one way to avoid all of those problems would be to take the Chromium source code and integrate it into my testing app, so that everything happens in one process, in memory, avoiding all three of the above inefficiencies.
- How hard would it be to do that?
- Is there any existing work to make it easier to do this?
- Is there some API I can use to tap directly into some browser's "rendering" functions (to do what I want directly)?
Edited to better reflect what I'm seeking suggestions for.
Edited by shurcool, 03 April 2012 - 02:10 PM.