NoshBar's Dumping Ground




I recently discovered the HTML5 Canvas; it's lovely, like the good old days of using $A000:0000!

However, when drawing an image to the canvas it kept coming out all blurry. As it turns out, there are actually quite a few possible reasons for this to happen... here's a short checklist of what I discovered on my quest to fix it.

Setting Canvas dimensions



Most articles I came across mentioned that you should always explicitly specify the dimensions of a canvas, either when creating the element in plain HTML, e.g.:
<canvas id="drawable" width="100" height="100"></canvas>

or via JavaScript, e.g.:
canvas = document.getElementById("drawable");
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;


Setting Canvas style dimensions



The above is actually not enough though: you also need to set the CSS dimensions, changing the above JavaScript to SOMETHING like:
canvas = document.getElementById("drawable");
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
canvas.style.width = canvas.width.toString() + "px";
canvas.style.height = canvas.height.toString() + "px";


Catering for different pixel ratios



Testing my code out on my retina iPad, I discovered that window.innerWidth and window.innerHeight were returning half the number of physical pixels I was expecting (they report CSS pixels, not device pixels).

Turns out there's a property called window.devicePixelRatio (greater than 1 on retina devices) that contains the scale you should use when dealing with dimensions...

...sort of. See the "Drawing Pixels is hard" link below for a much clearer explanation.
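The gist of the usual fix (a rough sketch of the common pattern, reusing the "drawable" canvas from the earlier snippets) is to size the backing store in device pixels, keep the CSS size in CSS pixels, and scale the drawing context to compensate:
canvas = document.getElementById("drawable");
context = canvas.getContext("2d");
ratio = window.devicePixelRatio || 1; // 1 on non-retina devices

// the backing store gets the full physical resolution...
canvas.width = window.innerWidth * ratio;
canvas.height = window.innerHeight * ratio;

// ...while the CSS size stays in CSS pixels, so the element doesn't grow on screen
canvas.style.width = window.innerWidth.toString() + "px";
canvas.style.height = window.innerHeight.toString() + "px";

// scale the context so drawing code can keep using CSS-pixel co-ordinates
context.scale(ratio, ratio);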

CSS image rendering settings



Turns out there's ALSO a CSS property you can set to affect how image scaling is handled, and it's called image-rendering.

You can use it something like this:
/* applies to GIF and PNG images; avoids blurry edges */
img[src$=".gif"], img[src$=".png"] {
    image-rendering: -moz-crisp-edges;          /* Firefox */
    image-rendering: -o-crisp-edges;            /* Opera */
    image-rendering: -webkit-optimize-contrast; /* WebKit (non-standard naming) */
    image-rendering: crisp-edges;
    -ms-interpolation-mode: nearest-neighbor;   /* IE (non-standard property) */
}


Pixel offsets



Finally, getting to what actually fixed it for me: a silly mistake.

I wanted to draw my image centered on the canvas, so ended up doing something like: x = canvas.width / 2 - image.naturalWidth / 2, doing the same thing for y.

This leads to floating point co-ordinates, perhaps starting in the middle of a pixel. Being a JavaScript dunce I simply used parseInt(x) to draw the image, and that seemed to cure it for me.
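Put together, the centering-and-rounding ends up as something like this (context and image being whatever canvas context and image you're drawing with):
x = canvas.width / 2 - image.naturalWidth / 2;
y = canvas.height / 2 - image.naturalHeight / 2;
// parseInt(x) works too, but Math.floor(x) does the same without the number-to-string detour
context.drawImage(image, Math.floor(x), Math.floor(y));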

Oddly enough though, I've seen many mentions of always drawing to x+0.5, y+0.5; apparently that's because a 1-pixel-wide stroke is centred on its co-ordinate, so integer co-ordinates smear it across two pixels, but that shouldn't matter when you're just blitting an image.

References



Drawing pixels is hard

About the image-rendering setting



Introduction


Calibre is handy for organising eBooks.
Stanza is really handy for reading eBooks on the iPad.
Calibre can act as an eBook server.
Stanza is capable of using this functionality to transfer books from your PC to your iPad.
Hooray, all my book-reading requirements are fulfilled!

*sigh*, of course not.

Stanza had a lot going for it: it was free; it could decode numerous eBook formats; it could download books from Calibre.
Sadly, it now has some issues: it is no longer available, and with newer versions of iOS it has a bug where you can't navigate away from resized pages (well, you can, but it's a pain in the gluteus maximus).

As I personally only read books at home (not on holiday), I also found it a waste to have the eBook stored on my device: not only because of the space usage, but also because the Calibre connection was really intermittent and unreliable, and I'd rather count dust-bunnies on the floor than use iTunes to sync data.

The Stupid Idea


It is thus that I came up with another almost-patented stupid idea:
Make a remote Calibre eBook viewer, almost like a Calibre-VNC.
Make a tiny web-server that reads the Calibre database and feeds book information using JSON, and use a tiny WebApp that requests pages as images.
All processing happens on the server, so PDF pages are rendered to an image of the desired size and fed up as high-quality JPEG/PNG images to the WebApp.

This means that I no longer need to download books to my device, nor do I need to rely on the viewing application to cater for the format I want, as I can simply implement it on the server.
In theory, it means you could also read all your books from anywhere in the world via the internet, even if you forgot to sync it with your device.

The Prototype


As always, when I want to prototype something quickly, I turned to FreePascal/Lazarus.
Lazarus had everything I needed to test the idea:
  • a web-server class to serve the JSON and images
  • a zlib class to open a test CBZ comic book
  • image classes to quickly load and resize JPEG images

Next I made a very hacky jQuery-powered WebApp page thing to list books and navigate through a chosen book.
The web request "API" was very simple: you could ask how many pages a book had (/info/?id=10) or ask for a page from a book (/view/?id=10&page=13).
I then made sure that the /view/ "command" supplied the dimensions of the current viewport, so that became the native 1:1 "scale" for pages from the book.
The idea was that you could then zoom the 1:1 image locally in JavaScript, perhaps pan it around a bit, and then make another request to the server with a zoom factor, to clear the image up a bit.
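To give a rough idea of the WebApp side, a page request boiled down to something like the sketch below (the width/height parameter names are just placeholders for "the viewport dimensions"; only id and page are spelled out above):
// ask how many pages book 10 has, then fetch page 13 rendered at the viewport size
$.getJSON("/info/?id=10", function (info) {    // info = whatever book information the server returns
    page = new Image();
    page.onload = function () { context.drawImage(page, 0, 0); }; // draw onto the HTML5 canvas
    page.src = "/view/?id=10&page=13"
             + "&width=" + window.innerWidth    // placeholder parameter names;
             + "&height=" + window.innerHeight; // only id and page are real
});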

I was set: it was all working pretty sweetly, so I went ahead and used libmupdf to test processing of PDF files, and... awesome.
It still has a minor issue where drawing the image to an HTML5 canvas results in it being blurry (despite setting the CSS dimensions to match, despite setting some custom moz and webkit filters), but -hey-, good enough for now.

The Proper Version


So, now it comes to making the "proper" version for people of the world to use, and I'm stuck.
I could make it in Python, but then people would have to install Python to get it to work... perhaps a non-issue?
I could make it in Mono just for my own fun, but again, it's another framework people need to install.
I could simply clean up the Lazarus version I guess, but there are a few niggly weird bits that I'd have to work around, and the resulting executable would be quite large.
I could make a plugin for Calibre, maybe?
I could make a plugin for Sumatra, maybe?
I'm tempted to do a very tiny version in C/C++ using the Mongoose web server library (Lua webpages? Woohoo!), a JPEG compressor library and libmupdf... buuut that kind of thing always takes quite a while to do properly.

This is all assuming that people would want to use this kind of thing and that I need to take it beyond the prototype...
But coding is fuuuun... can't... stop... self...





Okay, this will be a short one, but I can't explain how disproportionately joyous I am at having finally got shadows working in C.R.A.P.

It turns out I did NOT have to use shaders to do the shadows I wanted, as simple shadow volumes are sufficient to provide the sharp, clear shadows I was after.
(I had finally taken the jump into learning shaders and actually got shadows working ... alright ... with them, but they were either too soft or jagged, and not "precise" enough.)

Of course, I say "simple shadow volumes" and, well, they ARE simple, but there doesn't seem to be a quick explanation of how to calculate them. There are loads and loads of examples out there, each one showing the stencil-buffer procedure you need to use, but most gloss over the calculation of the silhouette of the object in the first place.

Sure, if I'm doing stuff in 3D I should know what a dot product is and stuff, but I can't help but feel that there's space for a simplified explanation... one I may just make "soon".
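For the record, here's roughly what that silhouette calculation boils down to, as a JavaScript-flavoured sketch (the mesh layout, the helper functions and lightPos are all made up for illustration, not how C.R.A.P. actually stores things): an edge belongs to the silhouette when one of the triangles sharing it faces the light and the other doesn't, and extruding those edges away from the light gives the sides of the shadow volume.
// vertices: array of [x, y, z] triples; triangles: array of { a, b, c } vertex indices
function subtract(p, q) { return [p[0] - q[0], p[1] - q[1], p[2] - q[2]]; }
function cross(u, v) { return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]; }
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

function findSilhouetteEdges(vertices, triangles, lightPos)
{
    // does each triangle face the light?
    var facesLight = triangles.map(function (t) {
        var a = vertices[t.a], b = vertices[t.b], c = vertices[t.c];
        var normal = cross(subtract(b, a), subtract(c, a));
        return dot(normal, subtract(lightPos, a)) > 0;
    });

    // count, per edge, how many light-facing and back-facing triangles use it
    var edges = {};
    triangles.forEach(function (t, i) {
        [[t.a, t.b], [t.b, t.c], [t.c, t.a]].forEach(function (pair) {
            var key = Math.min(pair[0], pair[1]) + "_" + Math.max(pair[0], pair[1]);
            if (!edges[key]) edges[key] = { from: pair[0], to: pair[1], lit: 0, unlit: 0 };
            facesLight[i] ? edges[key].lit++ : edges[key].unlit++;
        });
    });

    // silhouette = edge shared by a lit and an unlit triangle
    // (or an open-mesh boundary edge of a lit triangle)
    return Object.keys(edges).map(function (k) { return edges[k]; }).filter(function (e) {
        return (e.lit === 1 && e.unlit === 1) || (e.lit === 1 && e.unlit === 0);
    });
}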

Oh, and in the picture above, C.R.A.P. is currently keeping his cool in the slowly undulating waves of the water surrounding the city.
Yes, that's right, you can "swim"/float in the water, take THAT, GTA 1 through (<4).

With the "water" and the shadows, everything is just looking and "feeling" so much better. I would even go so far as to describe things as "not so ugly".




Just a terrible tiny video update showing the latest features added to C.R.A.P.:

  • Different car handling types
  • Portals to different levels/areas
  • Buildings and meshes now go transparent when you're behind them
  • Jumping out of moving/flying cars
  • Pedestrian(s) following A* paths
  • Abusing aforementioned pedestrians so they are no longer capable of following their destiny



Go home A*, you're drunk.





The figure on the right is what A* looks like when it's working properly.
The figure on the left is what A* looks like when it's been bludgeoned over the head with a dead raccoon whilst intoxicated.

As shown by the different colours in the level editor shot (right), parts of tiles are marked as one of:

  • unwalkable (red)
  • walkable (yellow)
  • preferred (green)

This means that while a pedestrian CAN walk across a road, they shouldn't always use roads to walk on when there's a (theoretically) safer place to do so: the pavement/sidewalk, all y'all.
In order to get this right I simply make unwalkable surfaces an obstacle as usual, but allocate a higher weight-scale to the walkable areas than to the preferred tile areas.
When I say "weight-scale", I mean that the weight a tile is normally assigned (say 10 for horizontal/vertical movement, 14 for diagonal movement) is multiplied by the scale assigned to the tile area.
open tile.H = distance from end * movement weight * tile area scale
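In code, that line works out to something like this sketch (the scale values, the Manhattan-distance estimate and the tileWeight/AREA_SCALE names are placeholders for illustration, not the numbers or names the editor actually uses):
var AREA_SCALE = { preferred: 1, walkable: 3 }; // unwalkable tiles simply never make it onto the open list

function tileWeight(tile, end, isDiagonal)
{
    var movementWeight = isDiagonal ? 14 : 10; // the usual horizontal/vertical vs diagonal step costs
    var distanceFromEnd = Math.abs(tile.x - end.x) + Math.abs(tile.y - end.y); // e.g. Manhattan distance
    return distanceFromEnd * movementWeight * AREA_SCALE[tile.area];
}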

Of course, if you mess that up a little bit ("obstacles? oh, you mean *10000 scale factor right? overflows? no, not heard of them, why?"), you get the image on the left. I don't claim to know why it ended up doing what it was doing, but it found its way eventually, and might come in handy for drunk pedestrians.
Not that they'd ever make it that far without being hit by a car, muh har har!

Oh, the square texture on the level shows the granularity of the path-finding.
It also turns out that it makes it easier to get a sense of perspective with it on, so I may keep it.

Navigation Meshes.



Recast is a lovely looking tool. It takes 3D geometry and calculates walkable areas depending on a large, large variety of factors.
Recast does not have awesome documentation.
Recast only comes with one demo code sample, an incredibly scary-looking project with many complicated GUI things that, as someone just looking for how it works, I don't care about.
Where is my "hello world, walk over me" example? Eeee!

So instead of, you know, having patience and doing things "the right way" and figuring it all out, I simply implemented OBJ export in my level editor.
Ten minutes later I had this lovely test up and running using a build of the aforementioned scary-looking project:



It all looked peachier than James and his giant produce, until I started looking closer.
No matter what settings I used, I could not get the mesh to produce a walkway between two buildings that are far enough apart for a tiny block to walk through.



Not only that though, I don't know how I would differentiate between "walkable" and "preferred" areas. It just generates one large polygon for the pavements and street, which is not really what I want.

That all said, this is by no means a fault of Recast. This is all just me being stupid.
I went back to the scary GUI project and found a single file in which you can basically see how everything works, and it's not toooo bad.
But for right now, I'm just too keen to prototype to mess around with this too much.

