FinalProject Checkpoint5
- Checkpoint 5
Checkpoint Description
Our program is finally taking real shape, and as we now start winding down, we have a couple of items left to tick off. Thus far, we’ve been able
to bring images into our application, but for the edits to our image to mean anything, we’re going to have to focus in on saving them back out.
With this, our program will largely, finally become feature complete.
A doodled-on bitmap that’s been saved to disk and opened with an image viewer
Objectives
1. Stop doodling from destroying our loaded image
2. Save our image and edits out to a bmp file
3. Load new images by drag and drop
Objective 1
Objective checklist:
• Make doodling stop destroying our image
You should try downloading some images for yourself. BMPs are hard to find, but you can always use an online image converter or
something. This particular image can be found on the Checkpoint 5 assignment page
But as soon as we click down on our Edit Window, our pretty image disappears!
Noooooo!
This is unfortunately happening because our entire image is set up to only use a SINGLE Texture. Whenever we click down on our Edit Window,
we’re overwriting all of the pretty image data we loaded into our program. The solution here is to use TWO Textures. One of them can hold our
image data that we loaded from disk, and the other can be used to hold our doodles.
1.1 Refactoring
Luckily, EditWindow ALREADY has a second Texture we can use – the one we inherit from Rec2D!
Let’s use our rectangle’s Texture to start working in that new TWO Texture system.
public EditWindow(Vector2 scale, Vector2 position, Texture imageTex) {
    super(scale, position, Color.GRAY);
    RecTexture = imageTex;
}
Removing backgroundColor from the constructor because our background is going to be our image for now
ImageEditor will now show up with some errors after doing this.
Perform the following -
• Make our call to create our Edit Window take in our loaded image instead of a Color
• Remove our assignment of _editWindow.DoodleTexture
Do this properly, and now we should be able to draw over our Edit Window without deleting anything.
Objective 2
Checklist:
• Save our image out to a bmp file
o Save our Edits as well
• Create an image scaling algorithm
o Use it to resize our doodles when saving
2.0 Saving our Bitmap header
So, now that we can make awesome little drawings, ideally, we’d like to save our works of art. Let’s hop over to ImageInputOutput.java and write
some code to save our images back out to our hard drive.
public void saveImage(String filePath) {
}
This method will be completely in charge of saving files
First, we’ll need a way to output ANY kind of data. Let’s use something familiar here and create a FileOutputStream inside saveImage
public void saveImage(String filePath) throws IOException {
    FileOutputStream output = new FileOutputStream(filePath);
    output.close();
}
Make sure to ALWAYS close your streams, otherwise your program can leak file handles and other resources
Our OutputStream is capable of writing out individual bytes of information to a file, like so-
FileOutputStream output = new FileOutputStream(filePath);
byte[] someBytes = {1,2,3,4,5};
output.write(someBytes);
output.close();
Writing to disk one small piece at a time is slow, so instead, let’s try to utilize SYSTEM MEMORY as much as possible, building our data up in byte arrays before writing anything out to long-term storage. Let’s start by simply writing out our bitmap file
header.
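The header-writing code itself isn’t shown here, so here’s a stand-alone sketch of the idea. It assumes one real piece of the BMP format: the offset to the pixel data is stored as a little-endian int at bytes 10-13, which is exactly the role startPoint plays later in this checkpoint. The class and file names are my own placeholders.

```java
public class HeaderSketch {
    // A BMP stores the offset to the pixel data as a little-endian
    // int at bytes 10-13; everything before that offset is header.
    public static byte[] extractHeader(byte[] file) {
        int startPoint = (file[10] & 0xFF)
                | (file[11] & 0xFF) << 8
                | (file[12] & 0xFF) << 16
                | (file[13] & 0xFF) << 24;
        byte[] header = new byte[startPoint];
        System.arraycopy(file, 0, header, 0, startPoint);
        return header;
    }

    public static void main(String[] args) {
        // Fake "file": 54 is the usual header size for a simple BMP
        // (14-byte file header + 40-byte info header).
        byte[] fake = new byte[100];
        fake[10] = 54;
        System.out.println(extractHeader(fake).length); // prints 54
    }
}
```

In loadImage you might stash these bytes in a _fileHeader instance variable, so that saveImage can write them back out with output.write(_fileHeader) before the pixel data, like the later code in this checkpoint does.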
If you run your code, you should see output.bmp on your desktop. You can verify you did this correctly by opening both your original image and
output.bmp in a hex editor and verifying that the top portion of both files is identical.
The number of bytes taken up by the header is dependent on the image itself. If you perform this test using blackbuck.bmp, expect the
header sections to be significantly smaller. This is why we used startPoint instead of some hard-coded value.
Recall that pixels is the Pixmap we created that stores our image data when we load in an image
We could, by using this Pixmap, instead create a loop that writes out our pixel color data based on the data inside pixels.
To get started, perform the following -
• Add a private instance variable to ImageInputOutput called _pixels of type Pixmap
• Hop inside loadImage()
o After loading the color data into our local Pixmap, set _pixels equal to pixels (or whatever you named your Pixmap)
Once you’ve done that, you can create the basic structure of our loop
public void saveImage(String filePath) throws IOException {
    FileOutputStream output = new FileOutputStream(filePath);
    byte[] colorData = new byte[_pixels.getWidth() * _pixels.getHeight() * 3];
    for(int y = _pixels.getHeight() - 1; y >= 0; y--) {
        for(int x = 0; x < _pixels.getWidth(); x++) {
        }
    }
}
Looping through our pixel data
Now, all we have to do is grab our color data for each of the pixels in our Pixmap, and we can save them to our array colorData
And then, we can save colorData out to our file!
To do this, we can start by grabbing each pixel in our Pixmap. We can do this with the following method call
for(int x = 0; x < _pixels.getWidth(); x++) {
int tempColor = _pixels.getPixel(x, y);
_pixels.getPixel() simply grabs the color data from the pixel at a given position
To start with WHY, this is an optimization technique. Recall in our Bitmap that the rgb components of a color were all stored in a single byte? Well,
in LibGDX, colors are represented by rgba (4 bytes), which is coincidentally the amount of space taken up by an integer. LibGDX uses an int to store
color data instead of an array of bytes because it’s faster to work with and pass around in memory.
And to answer HOW, recall the following line we’ve used time and again to create our Pixmaps.
new Pixmap(width, height, Format.RGBA8888);
Format.RGBA8888 describes how our image data is stored. This format means that we have red, green, blue, and alpha channels that are each
represented by 8 bits alongside each other in an integer, so a single color inside our Pixmap is four 8-bit channels packed side by side into one int.
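As a concrete illustration (the color value here is my own example, not from the checkpoint), here is how one such packed int breaks apart, with red in the highest byte and alpha in the lowest:

```java
public class ColorLayout {
    public static void main(String[] args) {
        // Fully opaque red in RGBA8888: the four channels sit side by
        // side, red in the highest byte, alpha in the lowest.
        int red = 0xFF0000FF;
        System.out.println((red >>> 24) & 0xFF); // red:   255
        System.out.println((red >>> 16) & 0xFF); // green: 0
        System.out.println((red >>> 8) & 0xFF);  // blue:  0
        System.out.println(red & 0xFF);          // alpha: 255
    }
}
```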
So, all that said, if our OutputStream can only write out individual bytes of data, how do we turn our int into a bunch of bytes?
MANUALLY!
Don’t sigh too hard! I promise this is the last new thing you’ll be learning!
To cover the idea VERY briefly, unit testing is a method of testing code functionality by coming up with test-cases, then comparing the actual
output of a piece of code against the results we expect. If this sounds complicated, don’t worry, it’s actually quite straightforward. Let’s start by
generating a test-case.
I want to turn the following integer into its individual byte components-
543152314
And to put this into an array of bytes, it will look like this
{00100000, 01011111, 11011000, 10111010}
If we now simplify the binary in the array back into decimal numbers, it will look like this
{32, 95, 216, 186}
HOWEVER, because Java is evil, remember that byte has a range from -128 to 127. 216 and 186 are too large to fit in this range, and will wrap around into the negative range.
This means our actual expected results are
{32, 95, -40, -70}
I know, I know, I’m as upset as you are
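You can see the wrap-around directly with a cast. This little demo (not part of the checkpoint code, just a sanity check) also shows the usual trick for getting the unsigned value back:

```java
public class ByteWrap {
    public static void main(String[] args) {
        // Casting to byte keeps only the lowest 8 bits, so values
        // above 127 wrap into the negative range.
        System.out.println((byte) 216); // prints -40
        System.out.println((byte) 186); // prints -70
        // Masking with 0xFF recovers the unsigned value as an int.
        System.out.println(((byte) -40) & 0xFF); // prints 216
    }
}
```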
This will be our test-case. We can put this into code by creating the following method inside Util.java
public static void testIntToSignedBytes() {
    byte[] testResults = intToSignedBytes(543152314);
    int[] expectedResults = {32, 95, -40, -70};
    for(int i = 0; i < testResults.length; i++) {
        if((int) testResults[i] != expectedResults[i])
            System.out.println("TEST FAILED! INDEX " + i + " IS "
                + testResults[i] + " EXPECTED: " + expectedResults[i]);
    }
}
Grabs the test results and compares them to the expected results
All this method does, is call intToSignedBytes with the number from our test case, and compares the answer it gives to the expected answer we just
generated ourselves. If these two don’t match, then it will tell us where our code failed.
Let’s add a call to this method somewhere in ImageEditor.java
public void create () {
Util.testIntToSignedBytes();
If we run it right now, all 4 bytes should cause the program to fail.
So, with our test-case set up, we now have a way to know when intToSignedBytes gives us the WRONG answer
But how do we generate the RIGHT answer?
Consider the following algorithm to isolate the left-most byte.
Starting point:
00100000, 01011111, 11011000, 10111010
Shift RIGHT 8 bits:
00000000, 00100000, 01011111, 11011000
Note how this destroys the rightmost byte, and adds a zeroed-out byte on the left
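Putting shifts like that together, here is one possible shape for intToSignedBytes (a sketch; your own version may differ): shift each byte down to the low position, then let the cast to byte discard everything above the bottom 8 bits.

```java
import java.util.Arrays;

public class Util {
    // Index 0 is the left-most (highest) byte, matching our test-case.
    public static byte[] intToSignedBytes(int value) {
        byte[] bytes = new byte[4];
        bytes[0] = (byte) (value >> 24);
        bytes[1] = (byte) (value >> 16);
        bytes[2] = (byte) (value >> 8);
        bytes[3] = (byte) value;
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(intToSignedBytes(543152314)));
        // [32, 95, -40, -70]
    }
}
```

Run testIntToSignedBytes again and all four bytes should now pass.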
Let’s now use our intToSignedBytes method to parse our color data over in ImageInputOutput.saveImage
byte[] color;
byte[] colorData = new byte[_pixels.getWidth() * _pixels.getHeight() * 3];
int colorIndex = 0;
for(int y = _pixels.getHeight() - 1; y >= 0; y--) {
    for(int x = 0; x < _pixels.getWidth(); x++) {
        color = Util.intToSignedBytes(_pixels.getPixel(x, y));
        colorData[colorIndex] = color[2];
        colorData[colorIndex + 1] = color[1];
        colorData[colorIndex + 2] = color[0];
        colorIndex += 3;
    }
}
output.write(_fileHeader);
output.write(colorData);
Turning our int into an array of bytes, converting from rgb to bgr, storing this data into our colorData array, and writing it out
With this all in-place, we should now be able to save the unedited version of our image to our desktop!
So, let’s have our program save our doodles too! Luckily, this is pretty easy, as we’ve already set up most of the program structure we need.
To do this, all we need to do is overwrite the pixel data from our base image with any of the pixel data from EditWindow._doodleMap
To get to this data, perform the following –
• Turn EditWindow.java into a Singleton
• Make _doodleMap a public variable
Now, our program is currently set up to only save our image at the start of the program. However, we want to save our image whenever
the user is done doodling. Let’s quickly hop over to InputManager and add that functionality.
Let’s say we only want to save our image when the user presses CTRL + S
It might be prudent to modify this if you’re on macOS and the following code doesn’t work for you. Maybe try something like M + S, or
whatever
Our InputManager detects when the user has pressed a button on their keyboard in the method called keyDown. The following code will detect if
that key is the CTRL key
public boolean keyDown(int keycode) {
if(keycode == Keys.CONTROL_LEFT) System.out.println("YOU PRESSED CONTROL!");
The system here is pretty intuitive. To detect the press of “M” for example, you could swap this out for Keys.M
Hurray!
If we keep track of what button has been pressed, we can make our code recognize when a combination of keys has been pressed.
private boolean _controlPressed;

public boolean keyDown(int keycode) {
    if(_controlPressed && keycode == Keys.S)
        System.out.println("YOU PRESSED CONTROL + S!");
    if(keycode == Keys.CONTROL_LEFT) _controlPressed = true;
    return false;
}

public boolean keyUp(int keycode) {
    if(keycode == Keys.CONTROL_LEFT) _controlPressed = false;
    return false;
}
Neato
Finally, just swap out our print statement for a call to saveImage
if(_controlPressed && keycode == Keys.S)
    try { ImageInputOutput.Instance.saveImage("PATHTOYOURDESKTOP!!!!!\\test.bmp"); }
    catch (IOException e) { e.printStackTrace(); }
Now, if we hit CTRL and S, we should save our file. And if we open it –
The first problem is that we’re writing out color data even where we haven’t doodled. Any place in DoodleMap where we haven’t drawn any color is
going to have the following color data-
(0,0,0,0)
In most file formats, writing this data out to the disk will create fully transparent areas in our image. However, bitmaps don’t support transparency,
so instead of drawing an “empty” pixel, it draws out the following data instead
(0,0,0)
Pure black
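One way to sidestep those black patches (a sketch, assuming the RGBA8888 layout described earlier) is to only let a doodle pixel overwrite the base image when its alpha byte is nonzero:

```java
public class AlphaCheck {
    // In RGBA8888 the alpha channel sits in the lowest byte, so a
    // doodle pixel is "empty" exactly when (pixel & 0xFF) == 0.
    public static boolean hasInk(int rgba8888) {
        return (rgba8888 & 0xFF) != 0;
    }
}
```

Inside saveImage’s loop, you could fetch the doodle pixel first and fall back to the base image’s pixel whenever hasInk returns false.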
The second problem is a size discrepancy between our doodle Pixmap and our image Pixmap. We can expose this issue by printing some
information out inside of saveImage.
System.out.println(_pixels.getHeight() + " " + _pixels.getWidth() + " \n"
+ doodle.getHeight() + " " + doodle.getWidth());
Printing out the width and height of both our Pixmaps
If we run our program and try to save an image, we’ll see both sets of dimensions printed in our console, and they don’t match.
Since our Pixmaps are different sizes, when we overwrite the data in colorData with the color data from doodle, we overwrite the WRONG
positions.
But how is that possible? We can doodle across our entire image; surely they must be the same size
Okay, yeah, I got lazy. Just imagine I drew over the entire bottom half of the image
Unfortunately, not quite. Recall that our entire Edit Window takes up 500 x 430 pixels in our application.
Vector2 editWindowSize = new Vector2(500, ScreenSize.y - 50);
_editWindow = new EditWindow(editWindowSize, new Vector2(ScreenSize.x - editWindowSize.x, 0), new Texture(editMap));
However, when we read in our image from our hard-drive, we read in as many pixels as the image itself contains
But when we DRAW our image to the screen, we scale it to be the same size (500 x 430) as our Edit Window.
batch.draw(rec.RecTexture, rec.Position.x, rec.Position.y, rec.Scale.x, rec.Scale.y);
rec.Scale.x and rec.Scale.y will scale up/down everything to be the same size as the rectangle it belongs to
This will DRAW our loaded image to look like it’s the same size as our doodle, but it doesn’t change any of the base pixel data.
Our colorData array is also set to be the size of our loaded image, NOT the size of our doodle
byte[] colorData = new byte[_pixels.getWidth() * _pixels.getHeight() * 3];
So this means that if I’m looping through my doodle, and the farthest right pixels are all set to the color orange
Like this
When I try to overwrite the LEFTMOST pixel in colorData, I instead end up overwriting something in around the MIDDLE of the screen horizontally.
The green dot is where we HOPE our pixel gets drawn. The orange dot is where it’s ACTUALLY drawn
This discrepancy continues to compound over and over, gradually distorting our image more and more as we iterate through all of our pixels.
Okay, I get it, jeeze. So what do we do?
The best way to fix this is to SCALE our doodle up/down to the size of our image
Making both our doodle Pixmap and our image Pixmap the same size will fix this issue.
Now, image scaling algorithms have been around for a very long time, and are very numerous. In fact, if you want to try filling out the contents of
this method for yourself, you can start by taking a look at this Wikipedia entry for a rundown on some of the most popular ones.
https://en.wikipedia.org/wiki/Image_scaling
But with so many algorithms, which one should we use?
The EASIEST one!!
We’ll be implementing Nearest-Neighbor scaling in our program. While it can produce visual artifacts, it’ll be fine enough for our program. Now,
image scaling is such a common problem, I’ll actually turn the explanation over from myself to someone else.
https://courses.cs.vt.edu/~masc1044/L17-Rotation/ScalingNN.html
Here, the example attempts to scale a 4x4 image to a 10x10 image. It starts by scaling just the first row of the image, to
explain how the algorithm works.
The article starts by describing each pixel taking up a percentage of each row.
In our first image, pixel 1 takes up 25% of our row
In our eventual scaled up image, pixel 1 takes up only 10% of our row
Lot of text here, but the general idea the article is trying to get across is that to scale an image up, we steal pixels from the original image and place
them into the new image.
If we want to scale up a row of our 4x4 image, it will look like the following
4x4
1 Pixel = 25% width
[Red, Yellow, Blue, Orange]
10x10
1 Pixel = 10% width
[Red, Red, Yellow, Yellow, Yellow, Blue, Blue, Orange, Orange, Orange]
Note how each of the colors still takes up vaguely 25% of the row
If sparks aren’t flying in your head just yet, the article also gives a basic algorithm for us to use as well.
sourceX = round(targetX / targetWidth * sourceWidth)
sourceY = round(targetY / targetHeight * sourceHeight)
Where targetX and targetY represent the locations that we want to place a pixel on our new image, and sourceX and sourceY represent the pixel we
copy from our original image.
Just to verify, let’s try running this algorithm to perform the operation we showed from above.
Source Image:
[Red, Yellow, Blue, Orange]
sourceX = round(0 / 10 * 4) = 0
[Red, _, _, _, _, _, _, _, _, _]
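To sanity-check the whole row, here’s a tiny one-dimensional sketch. Note that instead of the rounding formula above it samples at the center of each target pixel; plain rounding can land one index past the right edge of the source (which is why real implementations clamp or sample centers), while this center-sampling variant reproduces the 10-pixel row shown earlier exactly.

```java
import java.util.Arrays;

public class RowScale {
    // 1D nearest-neighbor: each target index samples the source pixel
    // nearest to the center of the target pixel.
    public static String[] scaleRow(String[] source, int targetWidth) {
        String[] target = new String[targetWidth];
        for (int targetX = 0; targetX < targetWidth; targetX++) {
            int sourceX = (int) ((targetX + 0.5) * source.length / targetWidth);
            target[targetX] = source[sourceX];
        }
        return target;
    }

    public static void main(String[] args) {
        String[] row = {"Red", "Yellow", "Blue", "Orange"};
        System.out.println(Arrays.toString(scaleRow(row, 10)));
        // [Red, Red, Yellow, Yellow, Yellow, Blue, Blue, Orange, Orange, Orange]
    }
}
```

The same mapping applied to both x and y is all scalePixmap needs to do.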
If you need any more help getting things up and off the ground, remember that you can loop through all of the values in a Pixmap like this
Pixmap target = new Pixmap((int) desiredSize.x, (int) desiredSize.y, Pixmap.Format.RGBA8888);
for(int targetX = 0; targetX < target.getWidth(); targetX++) {
    for(int targetY = 0; targetY < target.getHeight(); targetY++) {
    }
}
Now go ahead and give it a shot. I believe in you!
If you think you’ve got it, hop back into ImageInputOutput.saveImage, and make the following change
Pixmap doodle = Util.scalePixmap(
EditWindow.Instance.DoodleMap, new Vector2(_pixels.getWidth(), _pixels.getHeight())
);
HURRAY!
Objective 3
Checklist:
• Make our program detect when an image has been dropped onto the program window
• Regenerate our background image based on the image dropped inside
3.0 Dragging and Dropping
Alright, before we wrap up, let’s add one last piece of functionality. As-is, our program is hard-coded to read in only whatever image we tell it to.
Ideally, we’ll want our users to be able to decide which images get loaded in.
There are a lot of ways this is normally handled, but we’re going to do this in the laziest way we can to get it working because we’re getting really
close to the end of the project now.
We’re gonna work in the ability to choose our background image by letting the user drag and drop pictures into the program. We’ll start by hopping
into DesktopLauncher.java over in ImageEditor-desktop
We can make our program recognize when files have been dropped on top of the application very easily by using some nifty functionality plucked
from LWJGL
ImageEditor editor = new ImageEditor();
config.setWindowListener(new Lwjgl3WindowAdapter() {
    public void filesDropped (String[] files) {
    }
});
new Lwjgl3Application(editor, config);
Note that we now hold our ImageEditor in a local variable and pass editor to Lwjgl3Application, replacing the old inline new ImageEditor()
For those who want to know what we’re doing here, we’re essentially subscribing to an event system set up by LWJGL. When the lower-level parts
of our program detect that files have been dropped into the program, it will call this filesDropped method.
We can hook this code up to actually do things by first adding another method to ImageEditor.java that we’ll call filesImported.
public void filesImported(String[] filePaths) {
}
Just bear with me for a moment, everything will make sense really soon
Now we can hop back into DesktopLauncher.java and make filesDropped call filesImported.
public void filesDropped (String[] files) {
    editor.filesImported(files);
}
And now, we can go into filesImported and add some print statements to see what this code is doing
public void filesImported(String[] filePaths) {
    for(int i = 0; i < filePaths.length; i++) {
        System.out.println("You dropped " + filePaths[i]);
    }
}
And you can see that it prints out the file path to whatever item we dropped in
Since we’re also now making it so that our program doesn’t necessarily load an image in at the very start of the program, let’s also slightly change
how we generate our EditWindow
Pixmap editMap = ImageInputOutput.Instance.loadImage("testImage.bmp");
…
_editWindow = new EditWindow(editWindowSize, new Vector2(ScreenSize.x - editWindowSize.x, 0), new Texture(editMap));
Even re-opening old doodled images, and making even more doodles on top of them
It’s been a long journey, but you’re now at the final steps! At this point, our program is just about feature-complete, but there are still a few
things we haven’t done much with, like that little button at the bottom of our screen. Luckily, our last checkpoint will more or less be a victory lap for
us. We’ll be creating very little new functionality, and primarily just be extending some of the features we already have, while also making our
program just that little extra bit COOLER