After months of using only arrays in C++, I have recently started using vectors as my go-to basic data structure. While arrays are great for lower-level code and tasks that demand maximum efficiency, in most cases vectors handle things in a much simpler and more elegant way. I particularly like that a vector can report its own size: there is no longer any need to track the size of an array in a separate variable, which makes vectors noticeably easier to work with.
Oftentimes when using plain arrays it can be a nuisance to manage the objects they hold. Vectors also handle memory gracefully: when a vector goes out of scope, it calls the destructor on each element it holds. I would argue that using a smarter container like vector instead of an array decreases development time because it helps reduce programming errors.
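As a quick, made-up sketch (the names here are just for illustration) of what that buys you:
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    // The vector keeps track of its own size; no separate length variable.
    vector<string> names;
    names.push_back("Alice");
    names.push_back("Bob");
    cout << "holding " << names.size() << " elements" << endl;

    // When names goes out of scope at the end of main, the vector's
    // destructor runs and destroys each string it holds; there is no
    // manual delete[] like a dynamically allocated array would need.
    return 0;
}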
One of the interesting caveats involved with using vectors is the way iterating is handled. At first glance, it can be easy to assume that a simple for loop from int i = 0 to some_vector.size() would work just fine.
for (int i = 0; i < someVector.size(); ++i)
cout << someVector[i] << " ";
Well, it does work, but size() returns an unsigned type, so the comparison against a signed int triggers a signed/unsigned warning at compile time, which suggests there is probably a better way to do it. The first, simple fix is to change the type of 'i' in the loop to an unsigned type so that the comparison i < someVector.size() no longer produces the warning. Unsigned int will usually do, but vector actually uses size_t (via its size_type typedef) for this purpose, so that ends up being a more correct choice than an arbitrary unsigned type.
for (size_t i = 0; i < someVector.size(); ++i)
cout << someVector[i] << " ";
We can do even better by using iterators instead of the standard indexed for loop. Vectors have both a begin() method and an end() method that return iterators: begin returns an iterator to the first element, and end returns an iterator one past the last element. Hell, we could even use rbegin and rend, which return reverse iterators. Here is the simplest way to use them in a for loop:
for (vector<T>::iterator it = someVector.begin(); it != someVector.end(); ++it)
cout << *it << " ";
Notice that in this case we had to dereference the iterator, since it points to successive elements of someVector rather than acting as an index.
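And since rbegin and rend came up above, here is a small, self-contained sketch (using a concrete vector<int> purely for illustration) of walking a vector backwards:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> someVector;
    for (int i = 1; i <= 5; ++i)
        someVector.push_back(i);

    // rbegin points at the last element and rend one before the first,
    // so this prints 5 4 3 2 1.
    for (vector<int>::reverse_iterator it = someVector.rbegin(); it != someVector.rend(); ++it)
        cout << *it << " ";
    cout << endl;
    return 0;
}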
Anyway, vectors are proving themselves to be a very, very cool alternative to arrays. When using them, just be careful to initialize them properly if the initial size is known (see the sketch below). Check them out! http://www.cplusplus.com/reference/stl/vector/
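On that last point about sizing, here is a minimal sketch of two ways to set a vector up when you already know you will need n elements (the value of n here is just an example):
#include <cstddef>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    size_t n = 100;

    // Fill constructor: creates n elements, all initialized to 0.
    vector<int> counts(n, 0);
    cout << counts.size() << endl;    // 100

    // reserve() allocates capacity up front but adds no elements, which
    // avoids repeated reallocations as push_back grows the vector.
    vector<int> readings;
    readings.reserve(n);
    for (size_t i = 0; i < n; ++i)
        readings.push_back(static_cast<int>(i));
    cout << readings.size() << endl;  // 100

    return 0;
}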
Monday, March 12, 2012
Sunday, March 4, 2012
The Orthodox Canonical Class Form
Every "good" programmer has heard of it and knows it: the canonical class form, the "must-haves", the four methods every class needs in order to carry out its purpose with a reasonable degree of thoroughness. In case you have forgotten, I will go ahead and list them again here:
1. Default Constructor
2. Copy Constructor
3. Assignment Operator
4. Destructor
While this explanation follows Timothy Budd's Introduction to Object-Oriented Programming book, I believe he does a great job of outlining each of these methods and explaining their purpose to the reader. The default constructor is used to initialize objects and data members when no other value is readily available. The compiler does supply a default constructor in most cases, but relying on that implicit version is typically not a smart choice. The copy constructor is used in the implementation of call-by-value parameters. The assignment operator is self-explanatory, and the destructor is invoked when an object is deleted or goes out of scope. The copy constructor, to me, seems to be the least necessary. How often do I actually need to copy an object when I already have access to the original? I typically pass objects as pointers, but I can see how working with others and failing to write a copy constructor could cause somebody to shoot themselves in the foot when passing by value.
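As a reminder of what these look like in practice, here is a minimal sketch of a hypothetical class (the name and members are invented for illustration) that owns a heap-allocated buffer and therefore needs all four:
#include <algorithm>
#include <cstddef>

class Buffer {
public:
    // 1. Default constructor
    Buffer() : data(0), size(0) {}

    // (Extra) Convenience constructor with an initial size
    explicit Buffer(std::size_t n) : data(new char[n]), size(n) {}

    // 2. Copy constructor: deep-copies the other object's buffer
    Buffer(const Buffer& other) : data(new char[other.size]), size(other.size) {
        std::copy(other.data, other.data + other.size, data);
    }

    // 3. Assignment operator: allocate a fresh copy, then release ours
    Buffer& operator=(const Buffer& other) {
        if (this != &other) {                 // guard against self-assignment
            char* fresh = new char[other.size];
            std::copy(other.data, other.data + other.size, fresh);
            delete[] data;
            data = fresh;
            size = other.size;
        }
        return *this;
    }

    // 4. Destructor: give the memory back
    ~Buffer() { delete[] data; }

private:
    char* data;
    std::size_t size;
};
If a class does not manage any resources of its own, the compiler-generated versions are usually fine; it is classes like this hypothetical one, holding a raw pointer, where forgetting one of the four really bites.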
I am sure most readers have these four functions hardwired into their brains, but it is always good to reiterate! I personally fail to always take care of these things, and my goal is that this post will remind me to think through what I am doing and about the issues that can arise when the canonical class form is discarded. I would enjoy hearing some other purposes for the canonical class methods in the comments below!
Sunday, February 26, 2012
CGDB: My New Favorite Debugger
I have just recently begun delving into the use of debuggers. For some odd reason my previous professors never encouraged their use or even mentioned their existence. As I have started to move into more projects involving C and, in Object-Oriented Programming, C++, the available C/C++ debuggers have proved to be a godsend in finding and correcting bugs. First I began with standard gdb, which is nice, but I really yearned for something that integrated a little better with the source code and was more readable. Then I found out about the -tui flag for gdb. This opens a window for the source code and creates a usable text interface in the terminal for gdb. Stepping into certain functions pulls up the source for that function regardless of what file it is in, and setting breakpoints does not require me to have the file open in another pane or tab in order to figure out the correct line number. After that, I thought gdb -tui was all I would ever need, and I would never go back to using standard gdb. Many people would argue that ddd is much better, but given that it uses X windows, I would much rather use a terminal debugger and keep my hands off the mouse. The graphical front end is nice, but it involves too much clicking, and the font ddd uses is horrendous.
Then I found something amazing: cgdb, the curses debugger. Given that I am a die-hard vi user, this nifty debugger is one of the coolest tools I have found for debugging C and C++ code. The front page of the cgdb website, which I have linked above, shows a few of the features:
Features
- Syntax-highlighted source window
- Visual breakpoint setting
- Keyboard shortcuts for common functions
- Searching source window (using regexp)
- Scrollable gdb history of entire session
- Tab completion
- Key mappings (macros)
It follows the same general escape and insert modes that vi uses, has syntax highlighting, which neither ddd nor gdb -tui has, and allows for regex searching through the source code. With cgdb I can still work in the terminal and continue to maintain the vi-oriented mindset I have become so accustomed to. I doubt I will ever come across another debugger as clean and readable as cgdb. It has earned its spot in my toolset, and although it is just an ncurses frontend to gdb, it is very well thought out and makes a great tool even better.
Sunday, February 19, 2012
Never forget the stupid solution
If there is anything the last project taught me, it is to not declare variables on the heap when it is not required (see my previous blog post), and to always make a stupid solution first. A stupid solution is simply the quickest and least complicated way to solve a problem. Before attacking a problem with guns blazing, it is always better to get to know the problem a little first. A lot of the time when programming, I feel like the problem at hand is some kind of opponent, and taking the necessary preparation and precautionary measures is imperative if beating that opponent is something to be desired.
In my particular case, the failure to create a stupid solution resulted in a lot of pointless work for me and my partner. Now that the project is done, I am finding more and more that had we chosen to create the simple solution first, it would have decreased the time spent on the project by more than 10 hours, possibly a lot more. We went into the fight with guns blazing, throwing classes, linked lists, and structs at a problem that required nothing more than a few arrays. As I stated in my previous post, we still learned quite a bit about constructors, destructors, and the creation of linked lists in C++.
Sure, it is always nice to try to be fancy, but being fancy does not always get you the grade, or get you the job, or let you keep your job. Dr. Downing stresses on every project the importance of making the stupid solution first. Now that I have seen firsthand the consequences of failing to do so, creating a simple solution first is on my checklist for programming and other problem-solving tasks. Once we had the simple solution, it was only a few tweaks away from the final product (albeit a few very thought-provoking tweaks) and ended up being a much shorter and more understandable solution than what we had before.
I encourage everyone to do the same. It might seem at first to only waste time in getting to the final solution to a problem, but like I said, the battle can be better fought if you get to know the opponent a little better first.
Monday, February 13, 2012
Australian Voting
If a professor tells you that declaring variables on the heap is not necessary for a project, simply do not do it. This is the lesson I am currently learning with regard to Project 2, Australian Voting. The problem itself is very interesting, demonstrating a voting scheme I have never seen before. Put simply, all voters rank the candidates from their favorite to least favorite, and if no candidate receives a majority in a round of voting, the candidate(s) with the fewest votes are eliminated and their ballots are moved to each ballot's next-ranked candidate. Seems simple enough, right? It is, except a doable problem can easily become a huge chore when terms like "new", "delete", and "pointer to a pointer" start being thrown around. While this is taking me and my partner a little longer than it should, I know that in the long run it will pay off, because we are learning things that will help not only later in this class, but later on in the industry as well.
To be specific, we first started off using a linked list to store ballots (we will not go into details until after the project is due), and after that turned into a nightmare, we moved to vectors and arrays, and finally to just arrays. The linked list posed the most problems: we were unknowingly deleting elements of the list twice. The same thing happened with vectors, and it was not until later that we realized the bad memory management was just the beginning of our problems. Lesson learned: do not blow a project out of scope. Do not make it more than it has to be. If there are no requirements forcing use of the heap, by all means use the stack, and think about all the hassle you are saving yourself.
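Just to make the scheme itself concrete, here is a rough, generic sketch of the count-eliminate-redistribute idea with a few made-up ballots (this is not our project code, and all names and values are invented):
#include <iostream>
#include <vector>
using namespace std;

// Each ballot ranks every candidate from first to last choice;
// position[b] remembers which choice ballot b is currently counting toward.
int main() {
    const int numCandidates = 3;
    int raw[4][3] = { {1, 2, 3}, {2, 1, 3}, {2, 3, 1}, {3, 1, 2} };  // made-up ballots
    vector< vector<int> > ballots;
    for (int b = 0; b < 4; ++b)
        ballots.push_back(vector<int>(raw[b], raw[b] + numCandidates));

    vector<size_t> position(ballots.size(), 0);
    vector<bool> eliminated(numCandidates + 1, false);

    while (true) {
        // Count each ballot toward its highest-ranked surviving candidate.
        vector<int> tally(numCandidates + 1, 0);
        for (size_t b = 0; b < ballots.size(); ++b) {
            while (eliminated[ballots[b][position[b]]])
                ++position[b];
            ++tally[ballots[b][position[b]]];
        }

        // Find the current leader and the lowest surviving vote count.
        int best = 0, worst = 0;
        for (int c = 1; c <= numCandidates; ++c) {
            if (eliminated[c]) continue;
            if (best == 0 || tally[c] > tally[best]) best = c;
            if (worst == 0 || tally[c] < tally[worst]) worst = c;
        }

        if (2 * tally[best] > (int) ballots.size()) {   // majority reached
            cout << "Winner: candidate " << best << endl;
            break;
        }
        if (tally[best] == tally[worst]) {              // everyone left is tied
            cout << "Tie among the remaining candidates" << endl;
            break;
        }

        // Eliminate every candidate tied for the fewest votes; their ballots
        // count toward the next surviving choice in the next round.
        for (int c = 1; c <= numCandidates; ++c)
            if (!eliminated[c] && tally[c] == tally[worst])
                eliminated[c] = true;
    }
    return 0;
}
Notably, nothing here touches the heap at all; a handful of stack-allocated vectors carries the whole thing, which is exactly the lesson above.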
Sunday, February 5, 2012
Pair Programming
As a student in computer science, I am often given the choice to work on a project either alone or with a partner. While it seems like a trivial choice to me to go ahead and double the amount of brain power available on a project, there are studies that show benefits of both. There were two papers on the benefits of pair programming that were required reading for class. The first, titled All I Really Need to Know about Pair Programming I Learned in Kindergarten, essentially walks through a list of kindergarten lessons creatively applied to the practice of effective pair programming. I find the list so interesting I will just go ahead and post it here:
Share everything.
Play fair.
Don’t hit people.
Put things back where you found them.
Clean up your own mess.
Don’t take things that aren’t yours.
Say you’re sorry when you hurt somebody.
Wash your hands before you eat.
Flush.
Warm cookies and cold milk are good for you.
Live a balanced life – learn some and think some and draw and paint and sing and dance and play and work every day some.
Take a nap every afternoon.
When you go out into the world, watch out for traffic, hold hands and stick together.
Be aware of wonder.
While some seem to be a little forced in terms of relating them to good pair programming practice, most gave an interesting take on how to effectively program with a partner. "Wash your hands before you eat" refers to the act of washing out any form of skepticism in regards to pair programming. Each partner needs to really "buy in" to the idea in order for it to work. "Flushing" keeps people from getting too attached to code that does not work. Sometimes a fresh slate is needed in order to go on with the project. Of course, there are many other ideas here that emphasize the act of sharing the work with the partner and keeping each other from stressing out. Taking breaks is important along with not taking things too seriously.
The second paper, a study titled First Year Students' Impressions of Pair Programming in CS1, demonstrated not only many of the benefits of pair programming, but also those of working alone. Oftentimes difficult problems require intense thought that is only really possible when working alone. Then again, it makes sense for two people to come up with their own ideas for a difficult problem and then collaborate later to see which algorithm or approach yields the best solution.
In my years in computer science I have found a few stubborn people who believe they are indeed smarter and more efficient than their partner when paired up for pair programming assignments. It is this kind of attitude that employers do not want to see, and even if you are smarter than your partner, it never hurts to have another resource consistently available at your disposal. Two brains are almost always better than one.
Thursday, February 2, 2012
Frustration With Memory Management
Well, this might be slightly unrelated to "object-oriented programming", but the issue still applies to C++ and fits the nature of this blog. I have just finished my first real project in C, and while it was not in Professor Downing's class, it taught me a good deal about memory management. The project involved creating a list of sorted point "objects" (which were really structs), and to my eyes the idea lent itself well to a linked list data type. The sorted point structure would hold a pointer to the head of the list and the number of elements in the list. It seemed simple enough, but the concepts behind creating linked lists in a safe language like Java do not completely carry over to C. C (and C++) will not simply throw away the nodes in the list when they are removed. Instead, they sit there until the code explicitly says they are free and available for use again. The main problem I had was that the sorted point structure was being allocated twice on the heap: once in the tester file and once in the sorted point init function. While it took much longer than it should have to find the bug, I gained some knowledge of tools we have used in 371p and some that I am positive I will keep using. For the former I am referring to Valgrind, the utility which essentially yelled at all of my lost blocks of memory. While the information it gives can sometimes be a little ambiguous, it is definitely correct, and it can make you feel like an idiot even though your code might be producing the correct output. I will say that seeing the following three lines can be a huge sigh of relief when debugging code:
definitely lost: 0 bytes in 0 blocks
indirectly lost: 0 bytes in 0 blocks
possibly lost: 0 bytes in 0 blocks
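For what it is worth, the double-allocation mistake described above boils down to something like this. The real project was C with malloc and free; this is just a hypothetical C++-flavored sketch with invented names:
#include <cstddef>

// Invented stand-ins for the project's structures.
struct PointNode {
    double x, y;
    PointNode* next;
};

struct SortedPoints {
    PointNode* head;
    std::size_t count;
};

// The init function allocates its own SortedPoints...
SortedPoints* sortedPointsInit() {
    SortedPoints* sp = new SortedPoints;
    sp->head = 0;
    sp->count = 0;
    return sp;
}

int main() {
    SortedPoints* sp = new SortedPoints;   // ...and so does the caller.
    sp = sortedPointsInit();               // the first block is now leaked
                                           // ("definitely lost" in Valgrind)

    // One fix: pick a single owner of the allocation. Either let init be
    // the only place that calls new, or have the caller allocate and pass
    // the pointer in for init to fill. Then exactly one delete matches
    // exactly one new.
    delete sp;
    return 0;
}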
Speaking of debugging, I had a first-time experience with another tool today: GDB, the GNU Project Debugger. It is a very cool utility that lets you trace through your code, step into functions, see the stack trace, set breakpoints, assign new values to variables, show the values of variables at any time, and much more. I mainly used it with the -tui flag, which pulls up a nice interface in the terminal showing where you are as you step through the code. While debugging is never a fun task, it was interesting to use a real debugger for the first time.
Memory Management is a beast that demands perfection from the person writing the code. There is no room for mistakes and no place for slackers.