A good architect should lead by example. He should be able to fulfill any of the positions within his team. Without a good understanding of the full range of technology, an architect is little more than a project manager. It is perfectly acceptable for team members to have more in-depth knowledge in their specific areas, but it's difficult to imagine how team members can have confidence in their architect if the architect doesn't understand the technology.
The architect is the interface between the business and the technology team and must understand every aspect of the technology in order to represent the team to the business without constantly having to refer to others. Similarly, the architect must understand the business in order to drive the team toward its goal of serving the business.
Architects should be brought into the team at the earliest stage of the project. They should not sit in an ivory tower dictating the way forward, but should be on the ground working with the team. Technology choices should be made pragmatically, through hands-on investigation or advice from architect peers.
Tuesday, April 24, 2012
Tuesday, April 10, 2012
JSR 330: Dependency Injection for Java
Using the JSR 330 API is pretty simple. First you define an injection point in a class that wants to use a concrete implementation of a service:
// injection-point; no get/set needed...
@Inject
private BusinessService service;
Next you need to provide an actual implementation:
public class DefaultBusinessService implements BusinessService { ... }
If you have more versions of this service, you have to create a Qualifier annotation:
@Qualifier
@Target({ TYPE, METHOD, FIELD })
@Retention(RUNTIME)
public @interface FancyService { }
You use the Qualifier to inject a more meaningful implementation of the service:
@Inject
@FancyService
private BusinessService fancyService;
As you see, using JSR 330 is pretty simple. So, what's the drawback with this solution?
Injecting a service using annotations is an OK approach, but what I don't like is the lack of configuration options. What would happen, for instance, if I create two different implementations of the BusinessService interface and want to inject them into different beans? Do I then have to create two qualifier annotations to distinguish between them? To me that sounds lame.
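To make this concrete, here is a minimal sketch of that scenario; @SimpleService, ReportGenerator and BillingEngine are made-up names, while BusinessService and @FancyService are the types from the snippets above (each type would normally live in its own file).
import javax.inject.Inject;
import javax.inject.Qualifier;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

// A second qualifier, needed only to tell two implementations of the same interface apart.
@Qualifier
@Target({ TYPE, METHOD, FIELD })
@Retention(RUNTIME)
@interface SimpleService { }

class ReportGenerator {
    @Inject @SimpleService   // wants the plain implementation
    private BusinessService service;
}

class BillingEngine {
    @Inject @FancyService    // wants the fancy implementation
    private BusinessService service;
}

// How @SimpleService and @FancyService get bound to concrete classes is left to the
// container -- JSR 330 defines the annotations, not the configuration -- which is
// exactly the point: every new variant costs another annotation type in the code base.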
A possible JSR 330 implementation might configure its beans in one central XML file. Would it be so hard to, for instance, have the qualifier configuration there instead? The benefits would be:
- No compile-time dependency on a given qualifier annotation
- Changing the business service would just be a matter of changing the XML file
JSR 330 might work in a standalone project where the configuration never changes, but I really don't see how this would work in a large software system where configuration and requirements can change per customer and installation...
Tuesday, April 3, 2012
What today's software developers need to know
Today's software developers don't have to worry about many things their predecessors did, like coding to minimize RAM consumption even if it means significantly longer execution time, or WAN connections maxing out at 14.4 kilobits per second. (Although there may be some out-of-fashion skills they could benefit from, or that may yet regain relevance.)
However, the reverse is also true: there are many new skills and areas of expertise that today's software developers, hardware developers, system and network administrators, and other IT professionals need that simply didn't exist in the past. (Where "the past" could be anything from "more than three months ago" to five, ten, twenty or more years.)
Knowing what you need to know matters, whether you're just starting out as a software developer (or planning to become one), or are a "seasoned" professional who wants to keep your chops fresh so you can stay in, re-enter, or advance.
So here's what software developers should add to their existing knowledge portfolio.
"Programmers don't learn that someone else is going to take care of the code they write," criticizes Sarah Baker, Director of Operations at an Internet media company. "They don't learn about release management, risk assessment of deploy of their code in a infrastructure, or failure analysis of their code in the production environment -- everything that happens after they write the code. They don't learn that a log is a communication to a operations person, and it should help an operations person determine what to do when they read that log."
However, the reverse is also true: there are many new skills and areas of expertise that today's software developers, hardware developers, system and network administrators, and other IT professionals need that simply didn't exist in the past. (Where "the past" could be anything from "more than three months ago" to five, ten, twenty or more years.)
Knowing what you need to know matters, whether you're just starting out as a software developer (or planning to become one), or are a "seasoned" professional who wants to keep your chops fresh so you can stay in, re-enter, or advance.
So here are what software developers that should add to their existing knowledge portfolio.
Using libraries
"One thing that strikes me as a new skill is the need to work with massive pre-packaged class libraries and template libraries in all the new languages, like Java or C++ or Python," says consultant and software developer Jeff Kenton. "It used to be that once you knew the language and a small set of system calls and string or math library calls, you were set to program. Now you can write complex applications by stringing library calls together and a little bit of glue to hold them all together. If you only know the language, you're not ready to produce anything."Asynchronous programming and other techniques
"Because of the move to cloud computing mostly through web-based interfaces, we are seeing an emphasis on asynchronous programming," says Itai Danan, founder of Cybernium a software development and web design consulting company. "Ten years ago, this was mostly used by transactional systems such as banks, hotels and airline reservations. Today, all but the simplest applications require asynchronous programming, mostly because of AJAX. This is a very different style of programming -- most things taught about software optimizations do not apply across the network boundary."A breadth of skills
"It's become more important to have a breadth of skills" says Ben Curren, CoFounder, Outright.com, which offers easy-to-use online accounting and bookkeeping software for small businesses. "For example, web developers these days need to understand customers, usability, HTML, CSS, Javascript, APIs, server-side frame works, and testing/QA.""Programmers don't learn that someone else is going to take care of the code they write," criticizes Sarah Baker, Director of Operations at an Internet media company. "They don't learn about release management, risk assessment of deploy of their code in a infrastructure, or failure analysis of their code in the production environment -- everything that happens after they write the code. They don't learn that a log is a communication to a operations person, and it should help an operations person determine what to do when they read that log."
Agile and collaborative development methods
"Today's developers need to have awareness of more agile software development processes," says Jeff Langr, owner, Langr Software Solutions, a software consultancy and training firm. "Many modern teams have learned to incrementally build and deliver high-quality software in a highly collaborative fashion, to continually changing business needs. This ability to adapt and deliver frequently can result in significant competitive advantage in the marketplace.Developing for deployability, scalability, manageability
"Sysadmins are likely to own the software for much longer than the developers -- what are you doing to make their stewardship pleasant enough that they look forward to your next deployment?" asks Luke Kanies, Founder and CEO of Puppet Labs: "This includes deployability and manageability. New technologies are often much more difficult to deploy on existing infrastructure because developers haven't had time to solve problems like packaging, running on your slightly older production OS, or connecting to the various services you have to use in production."Monday, April 2, 2012
Dependency injection
Dependency injection (DI) is a design pattern in object-oriented programming. Its purpose is both to improve the testability of a large software system and to simplify the deployment of software components within that system.
Dependency injection involves at least three elements:
- a dependent consumer,
- a declaration of a component's dependencies, defined as interface contracts,
- an injector (sometimes referred to as a provider or container) that creates instances of classes that implement a given dependency interface on request.
The dependent object describes what software component it depends on to do its work. The injector decides what concrete classes satisfy the requirements of the dependent object, and provides them to the dependent.
In conventional software development the dependent object decides for itself what concrete classes it will use. In the dependency injection pattern, this decision is delegated to the "injector" which can choose to substitute different concrete class implementations of a dependency contract interface at run-time rather than at compile time.
Being able to make this decision at run-time rather than compile time is the key advantage of dependency injection. Multiple, different implementations of a single software component can be created at run-time and passed into (injected) the same test code. The test code can then test each different software component without being aware that what has been injected is implemented differently.
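To make the three elements concrete, here is a minimal, framework-free sketch; the MessageSender, SmtpSender, OrderService and Injector names are made up for illustration.
// The dependency contract (the interface the consumer depends on).
interface MessageSender {
    void send(String message);
}

// One concrete implementation the injector may choose.
class SmtpSender implements MessageSender {
    public void send(final String message) {
        System.out.println("SMTP: " + message);
    }
}

// The dependent consumer: it declares what it needs,
// but never decides which concrete class satisfies the need.
class OrderService {
    private final MessageSender sender;

    OrderService(final MessageSender sender) { // the injection point
        this.sender = sender;
    }

    void confirm(final String orderId) {
        sender.send("Order " + orderId + " confirmed");
    }
}

// A hand-rolled "injector": a test could pass a fake MessageSender
// into OrderService instead, without touching OrderService at all.
class Injector {
    static OrderService createOrderService() {
        return new OrderService(new SmtpSender());
    }
}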
There have been several ways of doing dependency injection in a Java application. The idea behind them is well understood, but the actual realization differs slightly: XML files or property files vs. Java annotations or classes. Choosing which framework to use can become a pretty religious discussion.
In a short time I will write about JSR 330 and why I don't like it.
Friday, March 23, 2012
How to shut down an ExecutorService
The executor framework (introduced in Java 5) makes it dead easy to create components running in a background thread. Just create an executor, give it a java.util.Runnable and that's it. But how do you do a proper shutdown of an ExecutorService?
pExecutorService.shutdown();
try {
    if (!pExecutorService.awaitTermination(pTimeout, TimeUnit.SECONDS)) {
        pExecutorService.shutdownNow();
    }
} catch (final InterruptedException pCaught) {
    pExecutorService.shutdownNow();
    Thread.currentThread().interrupt();
}
First we invoke the shutdown method on the executor service. After this point, no new tasks will be accepted.
The next step is to wait for the already running tasks to complete. In this example we allow the running tasks to complete within pTimeout seconds. If they don't complete within that time, we invoke shutdownNow(), which attempts to stop the still running tasks by interrupting them.
As a good practice we also make sure to catch InterruptedException and shut everything down immediately.
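The same steps can be wrapped in a small reusable helper. This is just a sketch following the snippet above; the class and method names are my own.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class ExecutorServiceUtil {

    private ExecutorServiceUtil() {
    }

    // Gracefully shuts down the executor: stop accepting new tasks, give the
    // running tasks pTimeout seconds to finish, then interrupt whatever is left.
    static void shutdownAndAwait(final ExecutorService pExecutorService, final long pTimeout) {
        pExecutorService.shutdown();
        try {
            if (!pExecutorService.awaitTermination(pTimeout, TimeUnit.SECONDS)) {
                pExecutorService.shutdownNow();
            }
        } catch (final InterruptedException pCaught) {
            pExecutorService.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}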
Monday, March 12, 2012
What should you cache?
A good way of solving performance problems in an application is often to add caching at strategic layers of the application. But what should you cache?
For me, the single most important thing to cache is everything that makes a network request.
Performing a network request will always have an overhead caused by the TCP/IP protocol, network latency, the network cards and the Ethernet cables. Even the slightest network hiccup might cause huge performance issues in your application. A slow database will seriously decrease the performance of your application.
It is often not possible to cache everything that makes a network request, but not doing so should at least be a conscious decision and not just something you forgot to implement.
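As a sketch of what that can look like, assume a hypothetical CustomerClient that fetches customer names over the network; a simple in-memory map in front of it saves repeated round trips (eviction and expiry are ignored here).
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CachingCustomerLookup {

    // Hypothetical client performing the actual network request.
    interface CustomerClient {
        String fetchCustomerName(String customerId);
    }

    private final CustomerClient client;
    private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

    CachingCustomerLookup(final CustomerClient client) {
        this.client = client;
    }

    // Only the first lookup for a given id pays the network cost;
    // later calls are served from memory.
    String customerName(final String customerId) {
        String name = cache.get(customerId);
        if (name == null) {
            name = client.fetchCustomerName(customerId); // the expensive network call
            cache.put(customerId, name);                 // benign race: worst case, two identical fetches
        }
        return name;
    }
}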
Friday, March 9, 2012
Public methods and package-private classes
Given these two classes
abstract class AbstractClass {
public void doSomething() {
System.out.println("Hello world");
}
}
public class ConcreteClass extends AbstractClass {
}
both in package org.mydomain. What will happen if I create a new instance of ConcreteClass in another package and try to invoke doSomething()? Will that work?
The answer to this question is: it depends on the JDK.
The Sun JDK allows you to access a public method in a package-private class. However, OpenJDK will throw
java.lang.IllegalAccessException: Class MyClass can not access a member of class ConcreteClass with modifiers "public"
I'm not sure what the JDK specification says about this, but the moral is:
Do not have public methods in package-private classes.
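For reference, one way to end up with an IllegalAccessException like the one above is through reflection. The sketch below is my assumption about how the call might have been made, not necessarily what happened here; Caller is a made-up class in another package.
import java.lang.reflect.Method;

public class Caller { // lives outside org.mydomain
    public static void main(final String[] args) throws Exception {
        final org.mydomain.ConcreteClass instance = new org.mydomain.ConcreteClass();

        // Direct call: compiles and runs fine, since ConcreteClass is public.
        instance.doSomething();

        // Reflective call: the method is declared in the package-private
        // AbstractClass, so the access check can fail with IllegalAccessException.
        // Method.setAccessible(true) is the usual workaround.
        final Method method = instance.getClass().getMethod("doSomething");
        method.invoke(instance);
    }
}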
Saturday, March 3, 2012
Where does Node.js stand?
I recently became aware of Node.js and I'm trying to sort out where Node.js fits in the server-side development picture. I found a few introductory videos from Ryan Dahl which sort of gave me the impression that Node might be the way of the future. So naturally the first thing I did from there was to Google "Node.js sucks". And of course, like anything that anyone thinks is good, somebody has to explain why that first guy was totally wrong. Whenever I hear the type of argument where one side says "X is the best possible," while the other side says "X is the worst possible," I always assume that X is very specialized – it's very good at doing something that people who like it need to do, but others don't. What I'm trying to put my finger on is just what exactly does Node.js specialize in?
So as I understand it Node.js has a few things that make it a lot different than traditional server-side development platforms. First off, Node code is basically JavaScript. Client code running on the servers? That's weird. Also, I shouldn't have said servers (plural) because Node.js requires a dedicated HTTP server – just one server, and it's got to be HTTP. This is also weird. Node's got some clear advantages though. It's asynchronous and event-based, so theoretically Node applications should never block I/O. Non-blocking I/O might make Node.js a powerful new tool for dealing with giant message queues, but maybe it's got more working against it than just being weird.
I think the guys who say Node.js sucks sound kind of crazy, but they do have a point or three. First and foremost, Node.js is single-threaded; then the detractors have a problem with the similarities Node.js shares with JavaScript; and finally, they say that Node.js cannot possibly back the claim of being blockless.
Addressing the concern about JavaScript is tough for me. I'm not an expert with JavaScript and I don't really know its advantages and disadvantages over other languages. I have read detractors state that JavaScript is a slow language because it is scripted and not compiled. I have read JavaScript proponents explain that it's not the language that is either slower or faster, but the way the code is written, meaning that the skill of the coder supersedes the inherent qualities of the language. Both arguments have merit, and I don't feel qualified to pick a winner.
Most server-side developers are very used to running basically linear processes concurrently in separate threads. This method allows you to run multiple complicated processes at the same time, and if one process fails, the other threads can still remain intact. So having a single thread run one process at a time sounds like it would be really slow. I don't think this is the case with Node.js because it is asynchronous and event based, which is a very different model than one might be used to.
Instead of running one process, waiting for the client to respond and then starting another process, Node.js runs the processes it has the data to run as soon as possible in the order it receives them. Then when the response comes back that's a new process in the queue, and the application just keeps juggling these requests. The overall design is such that Node developers are forced to keep each process very short because – as the detractors are quick to point out – if any one process takes too long it will block the server's CPU which will in effect block the application.
So you can't do long complicated processes like calculating pi with Node.js. Apparently they have workarounds for spinning off really complicated processes if you really need to, but that seems to be outside of the scope of the original plan. I think that where Node shines is in routing a high volume of low-overhead requests. Which means to me that Node.js is great for light messaging applications with a high user volume.
Are there other uses I've missed? Are there other issues with asynchronous programming in a single thread? Is there some part of the big picture I'm not seeing? Am I just plain wrong about all of this? Leave me a comment and let me know what's what.
Friday, March 2, 2012
The Last Responsible Moment
In Lean Software Development: An Agile Toolkit, Mary and Tom Poppendieck describe a counter-intuitive technique for making better decisions:
Concurrent software development means starting development when only partial requirements are known and developing in short iterations that provide the feedback that causes the system to emerge. Concurrent development makes it possible to delay commitment until the last responsible moment, that is, the moment at which failing to make a decision eliminates an important alternative. If commitments are delayed beyond the last responsible moment, then decisions are made by default, which is generally not a good approach to making decisions.
Paradoxically, it's possible to make better decisions by not deciding. I'm a world class procrastinator, so what's to stop me from reading this as carte blanche? Why do today what I can put off until tomorrow?
Making decisions at the Last Responsible Moment isn't procrastination; it's inspired laziness. It's a solid, fundamental risk avoidance strategy. Decisions made too early in a project are hugely risky. Early decisions often result in work that has to be thrown away. Even worse, those early decisions can have crippling and unavoidable consequences for the entire future of the project.
Early in a project, you should make as few binding decisions as you can get away with. This doesn't mean you stop working, of course-- you adapt to the highly variable nature of software development. Often, having the guts to say "I don't know" is your best decision. Immediately followed by "..but we're working on it."
Jeremy Miller participated in a TDD panel with Mary Poppendieck last year, and he logically connects the dots between the Last Responsible Moment and YAGNI:
The key is to make decisions as late as you can responsibly wait because that is the point at which you have the most information on which to base the decision. In software design it means you forgo creating generalized solutions or class structures until you know that they're justified or necessary.
I think there's a natural human tendency to build or buy things in anticipation of future needs, however unlikely. Isn't that the Boy Scout motto-- Be Prepared?
I think we should resist our natural tendency to prepare too far in advance. My workshop is chock full of unused tools I thought I might need. Why do I have this air compressor? When was the last time I used my wet/dry vac? Have I ever used that metric socket set? It's a complete waste of money and garage space. Plus all the time I spent agonizing over the selection of these tools I don't use. I've adopted the Last Responsible Moment approach for my workshop. I force myself to only buy tools that I've used before, or tools that I have a very specific need for on a project I'm about to start.
Be prepared. But for tomorrow, not next year. Deciding too late is dangerous, but deciding too early in the rapidly changing world of software development is arguably even more dangerous. Let the principle of Last Responsible Moment be your guide.
TechnicalDebt
You have a piece of functionality that you need to add to your system. You see two ways to do it: one is quick to do but is messy - you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.
Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.
The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity, developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.
The tricky thing about technical debt, of course, is that unlike money it's impossible to measure effectively. The interest payments hurt a team's productivity, but since we CannotMeasureProductivity, we can't really see the true effect of our technical debt.
One thing that is easily missed is that you only make money on your loan by delivering. Following the DesignStaminaHypothesis, you need to deliver before you reach the design payoff line to give you any chance of making a gain on your debt. Even below the line you have to trade off the value you get from early delivery against the interest payments and principal pay-down that you'll incur.
Designing a good API
What is an API
An application programming interface (API) is a source code-based specification intended to be used as an interface by software components to communicate with each other. An API may include specifications for routines, data structures, object classes, and variables.
Why is an API important
All developers have third-party products they prefer to work with. Why do we prefer working with some products, but not others?
Often it comes down to the fact that some products are easier to work with than others. They have better documentation, more intuitive methods and just seem more professional than others.
A bad API causes never-ending support requests and will in the end become a liability for a company.
What is a good API
A good API is
- Easy to learn
- Easy to use
- Hard to misuse
- Easy to read
- Easy to maintain code that uses it
- Easy to extend
General principles when designing an API
- Do one thing and do it well
- Functionality should be easy to explain
- If it's hard to name, consider rethinking the design
- Keep it small
- An API should satisfy its requirements
- When in doubt leave it out
- Implementation shouldn't impact the API
- Don't let implementation details leak into the API
- Minimize accessibility
- Make classes and members as private as possible
- Names should be self-explanatory
- Documentation matters
- Document every class, interface, method, constructor, parameter and exception
Exception design
- Throw exceptions to indicate exceptional conditions
- Don't force clients to use exceptions for control flow
- Don't fail silently
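As a small illustration of a few of these principles, here is a sketch with made-up names (a temperature-reading API chosen purely as an example; each public type would live in its own file).
// Minimize accessibility: only the interface and the exception are public;
// the implementation stays package-private and can change freely.
public interface TemperatureSource {

    /**
     * Returns the current temperature in degrees Celsius.
     *
     * @throws TemperatureUnavailableException if no reading can be obtained;
     *         the method never fails silently by returning a magic value.
     */
    double currentTemperatureCelsius() throws TemperatureUnavailableException;
}

// An exception for an exceptional condition, carrying enough context to act on.
// Clients are not expected to use it for control flow.
public class TemperatureUnavailableException extends Exception {
    public TemperatureUnavailableException(final String message, final Throwable cause) {
        super(message, cause);
    }
}

// Implementation detail: not part of the API, so it is not public.
class FixedTemperatureSource implements TemperatureSource {
    public double currentTemperatureCelsius() {
        return 21.5; // a stub reading; a real source would query a sensor
    }
}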
Wednesday, February 29, 2012
Don't blame the UI design for your bad code
A couple of days ago I was in a meeting where one of the subjects was
"Why does this Swing code suck?"
The code we had was messy, had logic spread all over the place and was a nightmare to maintain.
The answer from one of the developers actually responsible for creating this crappy code was
The UI design changed during the development of the UI
To me this is a really bad excuse.
One of the first things I learned about UI programming is to always create a good model. If you have a good model, implementing the actual UI part should be a piece of cake. In this piece of code the model sucked. The developer explained that this was caused by the ever-changing UI design. Have you never heard of refactoring? One of the cornerstones of agile development is continuous refactoring. If your model no longer matches the UI, then refactor it! Don't blame the changing requirements or design for your crappy code.
Sunday, February 26, 2012
Why I hate Maven
Yes, I admit that Maven has made life easier for Java programmers because of the tons of dependencies each project brings.
So, why do I hate it so much?
Thousands of network calls
One of the main reasons is the thousands of network calls made to remote servers to
- check versions
- check md5
- download pom files
- download jar files
To keep the internal repositories up to date, Maven has to check all the dependencies in the internal repository daily, and each Maven user has to pay the latency cost of downloading a lot of files all the time. A workaround for this problem might be to install Nexus or Jenkins on a server on the local network, but you still have the problem of latency on the internal network.
A workaround for me is to do 'mvn clean install' each morning, then go and fetch coffee...
Simple tasks are hard
Simple tasks such as creating an executable JAR or building a new version of an application should be a piece of cake, but they aren't. All of this can be done using a lot of plugins, but the major problem with these plugins is the lack of documentation.
You are forced into creating a bunch of Maven modules
Separating your code into different modules is good programming practice, but I don't like separating it just because the build tool requires it. Creating a simple Maven project often requires you to create 4 or 5 Maven modules, and that's just too much.
Repeatable builds
Do I really trust Maven to give me repeatable builds? No. I do not trust that downloading dependencies from the internet will give me the same versions of the files each time. A better solution would be to fetch all dependencies from a revision control system.
Saturday, February 25, 2012
Don’t interrupt my colleague
You are in the office, deeply concentrated on solving a problem. Then someone approaches your desk and wants to ask you for a solution to a different problem. What do you do?
Do you give him the answer to his problem?
Do you tell him to come back later or send the question in an email?
This is a situation I guess most developers experience several times a day. Most developers are nice guys and give a proper reply immediately, but is this really what we should do?
In my early years as a developer I always said “Yes, hang on and I’ll find a solution for you”, but lately I have changed my answer to “Send me a mail and I’ll give you some feedback later”.
The reason behind this is that I want to get rid of the 100 context switches each day. When someone asks you a question, you have to start thinking about that problem instead of the problem you were working on. When the guy leaves as a happy man 5 minutes later, you might think: what was I doing 5 minutes ago? It might then take 10 minutes before you are back on track and are productive again.
If you instead tell him to send you a mail, you can finish off whatever you are working on and then later give him a proper reply. This will make you finish your task sooner, and an added bonus is that your reply might also be better.
Interrupt rules:
- Only high-priority crises should be answered immediately
- Send questions by mail or IRC
- Set aside a specific time of day to answer questions, for instance just after lunch and before you leave for the day.
Thursday, February 23, 2012
Java generics, collections and reflection
Generics are a facility of generic programming that was added to Java in J2SE 5.0. They allow a type or method to operate on objects of various types while providing compile-time type safety. A common use of this feature is with java.util.Collection: instead of holding objects of any type, a collection can declare the specific type of object stored in it.
Generics is one of the new features in the Java language I really like. It makes working with collections easier and the compile-time checking reduces the risk of run time errors.
Lately I have been playing around with a legacy dependency injection system relying on reflection to inject properties into objects. That's when I discovered one of the drawbacks of generics: reflection and generics are not the best match.
This is, for instance, the code needed to retrieve the generic type of a method parameter that takes a generified java.util.Collection:
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

Class<?> getGenericType(final Method pMethod) {
    final Type[] genericTypes = pMethod.getGenericParameterTypes();
    if (genericTypes.length == 0) {
        throw new IllegalArgumentException("Method has no parameters");
    }
    // The declared type of the first parameter, e.g. Collection<Customer>
    final Type genericType = genericTypes[0];
    Class<?> propertyType = Object.class;
    if (genericType instanceof ParameterizedType) {
        final ParameterizedType parameterizedType = (ParameterizedType) genericType;
        // The first type argument, e.g. Customer
        propertyType = (Class<?>) parameterizedType.getActualTypeArguments()[0];
    }
    return propertyType;
}
As you can see it’s not the prettiest code in the world, but luckily it works.
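For example, given a hypothetical setter that takes a generified collection, the helper above resolves the element type; Customer, CustomerHolder and the demo class are made-up names, and getGenericType is assumed to be in scope (e.g. as a static helper in the demo class).
import java.lang.reflect.Method;
import java.util.Collection;

class Customer { }

class CustomerHolder {
    private Collection<Customer> customers;

    public void setCustomers(final Collection<Customer> pCustomers) {
        customers = pCustomers;
    }
}

class GenericTypeDemo {
    public static void main(final String[] pArgs) throws Exception {
        final Method setter = CustomerHolder.class.getMethod("setCustomers", Collection.class);
        // Prints "class Customer" -- the element type an injector would need to know.
        // getGenericType(Method) is the helper from the snippet above.
        System.out.println(getGenericType(setter));
    }
}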
The Programmer's Bill of Rights
- Every programmer shall have two monitors. With the crashing prices of LCDs and the ubiquity of dual-output video cards, you’d be crazy to limit your developers to a single screen. The productivity benefits of doubling your desktop are well documented by now. If you want to maximize developer productivity, make sure each developer has two monitors.
- Every programmer shall have a fast PC. Developers are required to run a lot of software to get their jobs done: development environments, database engines, web servers, virtual machines, and so forth. Running all this software requires a fast PC with lots of memory. The faster a developer’s PC is, the faster they can cycle through debug and compile cycles. You’d be foolish to pay the extortionist prices for the extreme top of the current performance heap– but always make sure you’re buying near the top end. Outfit your developers with fast PCs that have lots of memory. Time spent staring at a progress bar is wasted time.
- Every programmer shall have their choice of mouse and keyboard. They are the essential, workaday tools we use to practice our craft and should be treated as such.
- Every programmer shall have a comfortable chair. Let’s face it. We make our livings largely by sitting on our butts for 8 hours a day. Why not spend that 8 hours in a comfortable, well-designed chair? Give developers chairs that make sitting for 8 hours not just tolerable, but enjoyable. Sure, you hire developers primarily for their giant brains, but don’t forget your developers’ other assets.
- Every programmer shall have a fast internet connection. Good programmers never write what they can steal. And the internet is the best conduit for stolen material ever invented. I’m all for books, but it’s hard to imagine getting any work done without fast, responsive internet searches at my fingertips.
- Every programmer shall have quiet working conditions. Programming requires focused mental concentration. Programmers cannot work effectively in an interrupt-driven environment. Make sure your working environment protects your programmers’ flow state, otherwise they’ll waste most of their time bouncing back and forth between distractions.
About me
This site is all about Java and the daily life of a Java programmer.
Who am I?
- A Java developer with 12 years of experience
- Started out as a novice programmer with the responsibility of creating a rich Java client using Java 1.3.
- As time has gone by I have gradually converted into a hard-core server-side programmer and architect.
- Currently working as a senior server side developer and architect for a large software company.