Sunday 30 March 2014

The time complexity of algorithms is given in Big-O notation, which gives an asymptotic upper bound on the running time (usually quoted for the average or worst case).  The best comparison-based sorting algorithms run in O(n log n) time, while more naive sorts run in O(n^2) time, though some deliberately impractical sorts can take unboundedly long.  Generally, the built-in sorting algorithm in most languages is an optimized quicksort that falls back to insertion sort for small values of n.  Some languages, like Python and Java, use a tuned mergesort (Java uses it for arrays of objects, not primitives).  There are also non-comparison sorts, like radix sort and counting sort, which run in linear time (O(n*k) and O(n+k) respectively, where k is the key length or range of values) and never compare two values of the list being sorted.  The defining feature of comparison sorts is that they compare pairs of values to see which is smaller, so when these algorithms are analysed, their time complexity is essentially a count of how many comparisons they make.  Slow algorithms like bubble sort may need to compare values and swap array positions up to about n^2 times, whereas better sorts like mergesort and quicksort only need about n log n comparisons (for quicksort, in the average case).  While most of the time a fast O(n log n) sort will be sufficient for practical applications, other factors, like the size of n, the worst-case run time, the memory complexity, and the stability of the algorithm, become important for specific applications.  Knowing how to code a practical and efficient sorting algorithm is an important basic skill for all programmers, but, much like coding your own priority queue, it becomes superfluous in most languages where such things are built into the standard library.
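
To make that comparison-versus-counting distinction concrete, here is a quick sketch of both ideas in Python (illustrative toy code only; this is not how any language's built-in sort is actually implemented):

    # A rough sketch of merge sort: roughly n log n comparisons in total.
    def merge_sort(lst):
        """Return a new sorted list with the items of lst."""
        if len(lst) <= 1:
            return lst[:]
        mid = len(lst) // 2
        left, right = merge_sort(lst[:mid]), merge_sort(lst[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            # Every step of the sort comes down to one comparison like this.
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    # A rough sketch of counting sort: O(n + k) for ints in range(k),
    # and it never compares two list elements to each other.
    def counting_sort(lst, k):
        counts = [0] * k
        for x in lst:
            counts[x] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result

    print(merge_sort([5, 2, 8, 1, 9]))        # [1, 2, 5, 8, 9]
    print(counting_sort([3, 0, 2, 3, 1], 4))  # [0, 1, 2, 3, 3]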

Saturday 15 March 2014

I haven't been consistent in my slog posts, so I'll post something now.  I did well on assignment one; I had one error and got partial bonus marks.  The error was because I didn't give the TOAHModel a number-of-cheeses value until a stool was filled with cheese; I should have set cheeses equal to 0 in the __init__ method, I suppose.  I also failed the 5th bonus test case and I'm not sure why yet, but I'll look into it.  I implemented the optimal cheese-moving strategy, so I'm not sure if it's worth resubmitting.

About assignment 2, I realise after the fact that I did the first part wrong.  Due to ambiguous explanations, I thought the point of the assignment was to create a class that accepts a regex string and builds a tree representation of that string.  After seeing the classes the instructors gave us, I can see I misinterpreted the assignment: this part is apparently the algorithmic part, while the last part was the class building.  Regardless, I still did what the assignment said, apart from returning a string that would recreate the regex tree, not that that would be hard to change.  In the future, please be clearer about what you are looking for in assignments that are marked by hand.

Thursday 6 March 2014

This post will concern assignment 2 and exercise 3, since they are the problems currently assigned.  As of right now I have one class that creates a full regex tree with the appropriate number of children for each node.  Apparently the purpose of this assignment is to show our knowledge of inheritance, which is not made clear in the handout.  It is also unclear what exactly our code will be marked on, since the only methods we are asked to implement are __repr__ and __eq__, so it will clearly not be marked by an auto-marker but rather by a TA.  The fact that I will have to implement other classes that make my code longer and less efficient is trivial, but a pain, and it should have been made clear in the assignment that you need to use several classes, as I believe a one-class implementation is much cleaner and more efficient.  As it is, the assignment hints at using several classes, but it is left ambiguous as to whether this is necessary.  Basically, I do not appreciate that the marking scheme and exact specifications of the assignment aren't laid out clearly, and it is left as an exercise for the reader to guess what the markers want us to do.
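
For what it's worth, here is a rough sketch of the kind of inheritance-based design the assignment seems to be hinting at.  The class names and structure below are my own guesses for illustration, not the official starter code:

    class RegexTree:
        """A node in a regex parse tree (sketch; names are my own)."""
        def __init__(self, symbol, children):
            self.symbol = symbol
            self.children = children

        def __repr__(self):
            return '{}({})'.format(self.__class__.__name__,
                                   ', '.join(repr(c) for c in self.children))

        def __eq__(self, other):
            return (type(self) == type(other)
                    and self.symbol == other.symbol
                    and self.children == other.children)

    class Leaf(RegexTree):
        """A leaf node such as '0', '1', '2', or 'e'."""
        def __init__(self, symbol):
            super().__init__(symbol, [])

        def __repr__(self):
            return 'Leaf({!r})'.format(self.symbol)

    class StarTree(RegexTree):
        """A '*' node with exactly one child."""
        def __init__(self, child):
            super().__init__('*', [child])

    class DotTree(RegexTree):
        """A '.' node with exactly two children."""
        def __init__(self, left, right):
            super().__init__('.', [left, right])

    class BarTree(RegexTree):
        """A '|' node with exactly two children."""
        def __init__(self, left, right):
            super().__init__('|', [left, right])

    # e.g. the regex (1.(2*)) would become:
    tree = DotTree(Leaf('1'), StarTree(Leaf('2')))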

Exercise 3 I like, now that I've realised that a pre-order and an in-order traversal together do determine a unique tree, which took far too long to see.
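
Here is roughly the idea in Python, assuming all values are distinct; the BTNode class is just my own stand-in, not necessarily what the exercise provides:

    class BTNode:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def make_tree(preorder, inorder):
        """Rebuild the unique binary tree with these traversals."""
        if not preorder:
            return None
        root_value = preorder[0]        # preorder starts with the root
        i = inorder.index(root_value)   # inorder splits left/right subtrees
        left = make_tree(preorder[1:i + 1], inorder[:i])
        right = make_tree(preorder[i + 1:], inorder[i + 1:])
        return BTNode(root_value, left, right)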

Friday 28 February 2014

Recursion is a very useful and often necessary tool for solving a variety of problems.  Personally I probably use it more for DFS and flood fill than anything else; it is particularly useful for mapping problems that can't be solved with a purely dynamic programming approach.  Although a lot of problems that can be solved well with recursion can be solved equally well (and without risk of stack overflow) with looping, recursion is invaluable when looping isn't an option.  In light of our recent assignment, recursion is necessary for writing a regex interpreter, especially if lookahead/lookbehind is being incorporated (which I suppose it isn't in the assignment).  It's also really nice to compress a complex algorithm into a few lines of a recursive function.  Analysing the time efficiency of that function isn't always nice, though.
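
As an example of what I mean, here is a minimal recursive flood fill on a grid of characters (my own toy version, not anything from the course; it assumes the grid is small enough that Python's recursion limit isn't a problem):

    def flood_fill(grid, row, col, target, replacement):
        """Replace the connected region of `target` cells containing
        (row, col) with `replacement`, in place.
        Assumes replacement != target, otherwise it would recurse forever."""
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            return
        if grid[row][col] != target:
            return
        grid[row][col] = replacement
        flood_fill(grid, row + 1, col, target, replacement)
        flood_fill(grid, row - 1, col, target, replacement)
        flood_fill(grid, row, col + 1, target, replacement)
        flood_fill(grid, row, col - 1, target, replacement)

    grid = [list('..##.'), list('..#..'), list('.###.')]
    flood_fill(grid, 0, 2, '#', '*')   # fills the connected '#' region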

Monday 10 February 2014

As someone with a strong programming background from high school, most (all) of the learning for this course involves learning the intricacies of Python.  I am strongest in Java and C++, so learning a dynamically typed language like Python was difficult at first. I also did not like that Python doesn't require semicolons (though I've since gotten used to it, and since they can be added anyway I suppose it's a moot point).  I have since come to appreciate Python's built-in functionality for things like slicing strings and lists, which in other languages would require several lines of code.  One unfortunate necessity in Python is docstrings. Good documentation is obviously important in large projects, and building documentation skills is necessary; however, if one already knows how to comment and write docstrings, writing them for simple classes or functions is a pain (easy as it may be).  I'm out of things to say for this week.
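
A quick example of the kind of slicing I mean; each of these one-liners would be a small loop in Java or C++:

    word = 'assignment'
    print(word[2:6])    # 'sign' -- characters at indices 2, 3, 4, 5
    print(word[::-1])   # 'tnemngissa' -- reversed copy
    nums = [1, 2, 3, 4, 5, 6]
    print(nums[1::2])   # [2, 4, 6] -- every second element, starting at index 1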

Bye
-Kieran