Reflections: RMH Homebase

I had a really great time working with RMH Homebase. It gave me a chance to work in both familiar and unfamiliar territory. Over the course of the semester I have been working on another project in Drupal. Drupal is written in PHP, like RMH Homebase, and requires a SQL database to be set up for testing. Through Drupal I learned how to set up a LAMP stack on Linux, so by the time I needed one for RMH Homebase, I was ready. Still, there were some new, unfamiliar experiences that I enjoyed tackling.

  • SimpleTest – It seemed that many of my classmates, myself included, had never seen or used SimpleTest for unit testing PHP/SQL code. Thanks to advice in my classmates’ blogs I was able to set it up and get it running on my LAMP server (a minimal example follows this list).
  • Refactoring – Many classes at the college cover errors and syntax, documentation and style, but they never teach us how to adapt to someone else’s work. When I sat down with the RMH Homebase code, I got a chance to familiarize myself with another programmer’s code, and more than that, I had to refactor it. Refactoring on such a large scale was difficult at first, but the NetBeans IDE has a handy feature that searches every file in a project for a given term. Simply trying to understand an unfamiliar, complex system quickly was also a challenge. By reading the comments thoroughly and exposing myself to as much of the code as possible, I was able to become familiar with its key aspects.
  • Considering all angles – Lastly, a complex system like RMH Homebase taught me that you must look at every aspect of a system, every detail and every bit, to fully understand the problem and address all possible issues. For example, the login manager completed its task correctly, but it did not complete it securely (see the sketch after this list). Finishing a task does not mean the task is complete.
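To give a feel for what the SimpleTest setup looks like, here is a minimal sketch of a unit test. It assumes the SimpleTest library has been downloaded into a simpletest/ directory next to the test file; the normalize_name() function is just a hypothetical stand-in for real homebase code, not part of the actual project:

    <?php
    // Minimal SimpleTest sketch. Assumes the SimpleTest library sits in a
    // simpletest/ directory next to this file; autorun.php executes the
    // tests automatically when the script is run.
    require_once('simpletest/autorun.php');

    // Hypothetical stand-in for a real homebase helper function.
    function normalize_name($name) {
        return ucfirst(strtolower(trim($name)));
    }

    class TestOfNormalizeName extends UnitTestCase {
        function testTrimsAndCapitalizes() {
            $this->assertEqual(normalize_name('  aLICE '), 'Alice');
        }
        function testEmptyStringStaysEmpty() {
            $this->assertEqual(normalize_name(''), '');
        }
    }
    ?>

Running the file from the command line, or loading it through the LAMP server in a browser, prints a pass/fail report for each assertion.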
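On the login security point, the usual fixes are to never store or compare plaintext passwords and never build SQL queries by string concatenation. The sketch below shows the general idea only; it assumes PHP 5.5+ for password_hash()/password_verify() and an open PDO connection, and the users table and column names are hypothetical rather than the actual homebase schema:

    <?php
    // Sketch of safer credential handling. Assumes PHP 5.5+ and an open PDO
    // connection in $db; the 'users' table and its columns are hypothetical.

    // When creating or changing an account, store only a salted hash.
    function set_password(PDO $db, $username, $plainPassword) {
        $hash = password_hash($plainPassword, PASSWORD_DEFAULT);
        $stmt = $db->prepare('UPDATE users SET pass_hash = ? WHERE username = ?');
        $stmt->execute(array($hash, $username));
    }

    // At login, verify against the stored hash; the prepared statement also
    // keeps user input out of the SQL itself.
    function check_login(PDO $db, $username, $plainPassword) {
        $stmt = $db->prepare('SELECT pass_hash FROM users WHERE username = ?');
        $stmt->execute(array($username));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row !== false && password_verify($plainPassword, $row['pass_hash']);
    }
    ?>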

These experiences made me a better programmer and software engineer in more ways than one. I am glad to have had the opportunity to work on RMH Homebase and look forward to taking these lessons into my future career.

Challenges of the Software Age

This week, we got a chance to delve into some open source news sources. The task was to take two articles from opensource.com, post a personal response, and reflect on each article's argument or thesis. For my first article, I chose one about Louis C.K., the comedian. I am a lover of comedy, and Louis C.K. is one of my favorites. To be honest, I was a little surprised to see an article about him covered in a software magazine at all, but once I read it I fully understood.

The article states its premise outright: “The answer to stabilizing content and price is letting artists retain greater control of their work.” This claim is not unheard of, and it is a sound premise. It is based on Louis C.K. selling a download of his stand-up special for five dollars a pop. He used this model instead of the traditional production model to prove that ease of access will generate revenue, and that the current system of production is antiquated. Going this route, he spent approximately 250,000 dollars; surprisingly, he generated 1,000,000 dollars in revenue. The author argues that this electronic method produced better user-developer relations and allowed Louis to make better quality comedy because he had control over the entire production.

This argument needs to be heard many times over, and not just in comedy. Software can benefit from this way of thinking as well. Over and over we hear of revenue lost to software piracy. Yet, as the author argues, the issue can be solved simply through ease of access and a reasonable pricing model. Another great example of this idea is mobile application development. The rise of affordable, easy-to-install mobile apps demonstrates the key principle: price and production affect piracy. The current structure of the software world promotes attacking individuals for sharing files and punishing paying users with inconvenient protection measures. Abandoning that approach improves customer relations by making paying users feel less punished for doing the right thing.

The article also mentions price. Software can cost upwards of millions of dollars. How much of that cost is purely administrative? How much comes from overhead such as advertising and publishing? When you look at a sixty-dollar game on a store shelf, I can tell you a significant chunk of that price goes straight to the publisher, not the developer. By removing these middlemen in the internet era, we can reduce the cost of software to the point where it is almost nonexistent (open source, anyone?). We can create better software by fostering a more direct relationship between the end user and the developer, and better software at a better price by keeping greater control of the development process. That is what I took from the article. Developers must always be agile in a fast-paced field, so why not start adapting now?

The second article I chose was called “A cure for the common troll”. By troll, they mean patent trolling. With the rising boom in software come new technologies and innovations, which can all be patented to protect the developer's intellectual property. Some companies have arisen whose sole purpose is to collect patents and then sue alleged infringers. This is the art of the patent troll, and its results are adverse. For one, patent trolling restricts innovation by preventing smaller companies from developing new products without the risk of being sued out of existence. Another problem is that many of these patents have been bought, sold, and traded, so the holders are not the true inventors but people who acquired the patents from the inventors for profit. In doing so, they go against the whole point of a patent: to protect the inventor. Lastly, many patents are granted in an archaic manner. A common analogy is that it is like patenting the doorknob or the wheel: basic, universal components that should not be patentable because they are so fundamental and necessary to software development.

The article suggests several ways to deal with these modern-day trolls:

“First, create a compulsory licensing mechanism for patents whose owners are not making competitive use of the technology in those patents. Patent owners should be required to declare the areas or products that incorporate the patented technology. All other non-practiced areas should be subject to a compulsory license fee. (A non-practiced “area” would be a market or technology sector or activity in which the patent owner is not using or licensing the invention rights, though the owner may be using the patent in other “areas.”) Licensing rates for patents could be set by patent classification or sub-classification based on industry average licensing rates for each such technology. Again, this would only apply to applications where the patent is not being practiced or voluntarily licensed by the patent owner.
Given the vast number of patents issued, an accused party should have a reasonable, set time after receiving notice of a patent within which to pay for the license going forward. Compulsory licenses are authorized by the treaties we have entered into, and we have significant experience with compulsory licensing of copyrighted works from which to develop an analogous patent mechanism. Uniform rates could be set.
Second, cap past damages for trolls at $1 million per patent and eliminate the possibility of obtaining injunctive relief for infringement of patents that are not in use, or are not used commercially, by the patent owner.
Third, a mandatory fee shifting provision should be put in place where the plaintiff is required to pay the defendant’s reasonable defense fees if the plaintiff does not obtain a better recovery than what was offered by the defendant. (Presently, there is such a cost shifting mechanism in place; however, the relevant costs typically are a tiny fraction of the legal fees in a case.)
Fourth, for U.S. domestic defendants, require that suits be brought in the venue where the defendant’s primary place of business is located.
Fifth, if a party wants more than limited discovery from the opposing side, particularly for electronically stored information (ESI), the requesting party should pay the cost of production. For large technology companies, ESI production alone can cost into the seven figures.”

I am a big supporter of all of these concepts. I would also add one item to the list: patents should not be able to be bought or sold, only inherited or renounced (made open to all). That change would leave patent-trolling companies insolvent and unviable. Each of the author's other suggestions is a great idea and should be considered when updating our current system of granting and enforcing patents.

These two articles discussed some hot-button issues, not just in open source development but in all forms of software development. I particularly enjoyed this assignment and found the articles to be both informative and interesting. I look forward to reading more!

Reflections: Parallelism via Multithreaded and Multicore CPUs

Any readers who are members of the ACM or IEEE might be familiar with the magazine Computer. This post is a critical look at an article from the March 2010 issue called “Parallelism via Multithreaded and Multicore CPUs”. I will offer my own analysis along with a general overview of what the article covers.

In short, the article is a “comparison between multicore and multithreaded CPUs currently on the market”. The attributes it focuses on are “design decisions, performance, power efficiency, and software concerns in relation to application and workload characteristics”. This area really intrigues me because it is one I have always wanted more clarity on: what do multicore and multithreaded processors actually buy you, and how much do they matter, especially when choosing the right CPU for a new computer?

The article starts with design decisions. It describes how multithreaded cores provide multiple hardware threads so that switching between threads is easier and more efficient. The most common approach is simultaneous multithreading, better known by Intel's name for it, hyperthreading. With this technique, each cycle the core issues instructions from only a subset of the threads on the chip. Interestingly, the article also notes that no commercial CPU issues instructions from more than two threads per core per cycle. That tells me that how a CPU is threaded is a fairly negligible difference when deciding which CPU is better. The article further explains that the limit on threading comes down to scalability: beyond two threads you pass a “saturation point”, after which executing additional threads yields little extra benefit. There is, however, a way around this dilemma: multiple cores, which is great news. These facts suggest that threading is fairly standard across CPUs, but the number of cores makes a real difference in capability.

Another consideration is the cache. There are currently three types of cache organization: shared, private, and dynamic, the last being very rare. For that reason the article compares the two major types, shared and private. Shared means the cache is shared between the cores, while private means it belongs to one core alone and cannot be used by the others. For programs running across multiple cores, shared cache is better if the software threads need to share data: it avoids copying data and is more efficient because cores do not have to reach into each other's caches indirectly. The drawback is that shared cache is more unpredictable. The software is less isolated and can end up using much more cache than necessary, and it becomes difficult to gauge how much service each thread is getting, which leads to instability. Private cache is therefore more predictable and gives more controllable performance. These findings illustrate some of the tricky decisions in choosing a CPU; it becomes a matter of deciding which trade-off you want to make based on the software you run. Personally, I would prefer private cache, because stability is often worth more than speed when it comes to managing memory.

So when it comes to multicore processors, the article suggests there is no clear-cut choice; it all depends on your software's specific needs and design. Hardware is always built with certain software designs in mind, and CPUs are no exception. Just like in life, there is rarely one true right answer.

Reflection: The Cathedral and the Bazaar

These are my personal reflections on the article The Cathedral and the Bazaar, written by Eric Raymond.

The premise of the article is that software development falls into two categories: a cathedral style and a bazaar style. According to Eric, each style has its own purposes and uses that make it unique and effective in its own way. The majority of the article, however, is an anecdote about how he first discovered what is now known as the bazaar style, through his development of the open source program fetchmail. Woven into the anecdote are pieces of programming wisdom that Eric offers to any programmer reading the article.

As for my personal view, I found the premise interesting. I think the theme of the paper is accurate and Eric gives it a fair shake. He mentions that most of the time the core of a program is built cathedral style, while the innovation and tooling added on top are done bazaar style. This blend of the two styles is what makes the most sense to me: you want skilled programmers who understand the main concept to build a strong foundation, and then the community can help build the rest.

I also found the anecdote quite entertaining and a good example of how the bazaar style can be effective. I agree with the majority of his claims, such as releasing early and treating the user as a co-developer. These concepts were new to the field back in 1996 but have flourished today, and for good reason. As Eric points out, the most famous example of the bazaar technique is Linus Torvalds' work on Linux, and that was only the beginning. Today programs such as FileZilla and Firefox are quite competitive in the market and show that bazaar techniques can lead to more stable, better-built programs.

There is one area where I have to disagree with Eric, though: he does not address counterarguments. He writes a great piece about how effective the bazaar is, but he does not anticipate the audience's objections. He may not be aiming for a purely argumentative or persuasive paper, but at the least I would hope he would answer those who say the bazaar style is a poor way of doing things. Like most open source writers, he never really addresses the issue of payment. The thing about a bazaar is that the people in it are making money; they do not show up out of the kindness of their hearts and do everything for free. This is where his analogy seems to be off the mark.

However, I am not suggesting this is a fatal flaw in the argument for open source. As he says near the middle of the article, by making the program open source he had thousands of users finding bugs and suggesting ways to fix them. I just feel his analogy misrepresents the relationship between the core developers (the cathedral builders) and the community (the bazaar). A better way to describe it is a house being built by an architect in a neighborhood, where the neighbors volunteer to help make some great improvements to the house, motivated either by fondness for the owner or simply by wanting to try out their skills.

Either way, this article was entertaining, funny, and convincing. Definitely worth the read.