I wanted to share some facts:
- HTTPS - all parts of the request are encrypted except the hostname and port, which are needed to set up the SSL/TLS connection. But don't treat GET request data in the URL as fully secure: even though it is encrypted in transit, full URLs still end up in server logs, browser history and Referer headers.
- CORS - works from IE 8 onward, with some issues (IE 8 and 9 use XDomainRequest, which has notable restrictions). Detailed explanation: "XDomainRequest - Restrictions, Limitations and Workarounds".
Automatic JPG image minimization: Problem
For example, you have a site where most of the content is images. Say we have an online shop that displays an image for every product on its home page, so the final page is several MB in size. You know you can use lazy loading to decrease the number of images loaded up front, but you are still left with over a MB of images.
Normally, you go to your design team and ask them to compress the images as much as they can. That sounds simple enough, but the design team has many things to do and doesn't really see the issue with a few KB here and there. The interesting thing I found is that they don't really care, so the only way to win is to automate 99% of the process.
Automating image compression sounds very simple at first:
- Take an image.
- Create a compressed version at a specified quality.
- Check whether the compressed image is still good enough; if it is, go back to step 2 with the quality decreased by one.
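The loop above can be sketched in a few lines. This is a minimal simulation, not a real JPEG pipeline: `encode` here is a made-up stand-in that fakes lossy compression by quantizing pixel values, and `quality_ok` is a hypothetical acceptance check; real code would plug in an actual JPEG encoder and a perceptual quality metric.

```python
def auto_compress(image, encode, quality_ok, start_quality=95, min_quality=1):
    """Lower the quality one step at a time, keeping the last acceptable result."""
    best = encode(image, start_quality)
    for q in range(start_quality - 1, min_quality - 1, -1):
        candidate = encode(image, q)
        if not quality_ok(image, candidate):
            break  # the previous quality was the lowest acceptable one
        best = candidate
    return best

# Stand-in "encoder": quantizes pixel values more coarsely as quality drops.
def encode(image, quality):
    step = 101 - quality
    return [round(p / step) * step for p in image]

# Hypothetical acceptance check: no pixel may drift by more than 3 levels.
def quality_ok(original, candidate):
    return max(abs(o - c) for o, c in zip(original, candidate)) <= 3

print(auto_compress([7, 13, 200, 55], encode, quality_ok))
```

Walking the quality down one step at a time is the simplest strategy; a binary search over the quality range reaches the same answer with far fewer encodes.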
Is it straightforward? Yes. Is it simple? No. The hard part is checking whether the image is still good enough. Let's look at user-driven compression:
- The user takes an image and adds it to a compression tool.
- The compression tool creates a compressed image at a specified quality, which on the first iteration is 1.
- The user judges the visual information loss; if the loss is too big, they go back to step 2 with the quality increased by one.
- Still not good – the user has to iterate through all the steps to find the best result, which could take them a few hours a day.
We can improve on this algorithm by adding an automated best-quality assessment. Now the user starts where the machine finished, and only needs to adjust the settings if the image does not look good.
So now the problem is making a computer understand how good the quality of an image is compared to the original. Mathematicians have already tackled this issue, and there are many different ways to measure quality programmatically. I found the best to be PSNR (Peak Signal-to-Noise Ratio). So far I have been unable to find implementations of the SSIM (Structural Similarity) algorithm, which should give better results, or of its improvements MS-SSIM and MS-SSIM*. But PSNR is good enough for now.
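PSNR itself is easy to compute: it is just the mean squared error between the two images, expressed in decibels relative to the maximum possible pixel value. A minimal pure-Python sketch over flat pixel sequences:

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak Signal-to-Noise Ratio in dB; higher means the images are closer."""
    if len(original) != len(compressed):
        raise ValueError("images must be the same size")
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return math.inf  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

print(psnr([52, 55, 61], [52, 57, 60]))  # small error -> high PSNR
```

Higher is better; for lossy image compression, values in the rough range of 30-50 dB are commonly treated as acceptable, though the right threshold is subjective and content-dependent.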
Website optimization. Part 1. Theory behind optimization
Right now I am optimizing a website, and this time the target site is not a small one. It has all types of problems, and because development took several years, it has accumulated many issues. In this article I will share the basics of optimization, so if you are already familiar with the main ideas, this article is not going to be useful to you.
Site optimization has a front-end part and a back-end part. Normally around 20 percent of page load time is spent in the back-end, 70 percent in the front-end and 10 percent in between. This is not a rule, just a developer's insight, and the balance varies with features, code, site styles and so on. I am leaving out infrastructure-level performance work, like proxies, scale-up and scale-out strategies and other ways to optimize your server architecture.
The back-end mostly has two issues: missing or bad caching, and bad data selection. Basically, in most cases caching will fix most of the problems. Sometimes you need to rewrite bad database calls or poorly constructed logic containing pyramid selects: for example, loading a product triggers loading of other data, and that other data selects yet more data, and so on.
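The pyramid-select pattern is easier to see in code. A small sketch using Python's built-in sqlite3 module (the tables and data are made up for illustration): the first function issues one extra query per product, while the second fetches everything in a single join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE price (product_id INTEGER, amount REAL);
    INSERT INTO product VALUES (1, 'cup'), (2, 'plate');
    INSERT INTO price VALUES (1, 2.5), (2, 4.0);
""")

# Pyramid / N+1 style: one additional query per product to fetch its price.
def prices_pyramid():
    result = {}
    for pid, name in conn.execute("SELECT id, name FROM product"):
        amount = conn.execute(
            "SELECT amount FROM price WHERE product_id = ?", (pid,)
        ).fetchone()[0]
        result[name] = amount
    return result

# Better: a single joined query returning the same data.
def prices_joined():
    return dict(conn.execute(
        "SELECT p.name, pr.amount FROM product p "
        "JOIN price pr ON pr.product_id = p.id"
    ))

print(prices_pyramid() == prices_joined())  # same data, far fewer round trips
```

The pyramid version costs N+1 round trips for N products; the join costs one. Against a real network-attached database, that difference dominates page load time.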
Because I am not a back-end programmer, I can speak more enthusiastically about front-end optimizations. The main idea – size matters! The less you have, the faster the website will be. I can name two main categories:
- General optimizations – these are far cheaper to do and give an almost instant increase in performance.
- Refactoring optimizations – redesigning the code to run faster. These optimizations require a lot of time and work, but they give you the ability to reach the dream speed of 3 s for a fresh page load.
- GZIP – compress all text-based content, including generated HTML. It will increase your site's load speed by decreasing the download time for your pages.
- Image compression – choose the right quality for your JPG images and you will get the best performance possible, again by decreasing download size.
- CDN – a CDN distributes your static content over many location-based servers, which decreases the distance between client and server.
- Minify JS and CSS – this shrinks large code files down to more acceptable sizes.
- Caching – your content should have cache headers so that it can be stored and reused.
- Combine JS and CSS files – fewer requests means less time spent waiting.
- Location – in a perfect world, CSS goes into the HEAD tag and JS at the end of the page. In a less perfect world JS can still sit in the body, with only your main JS library in the HEAD tag. DON'T PUT ALL JS AND CSS INTO THE HEAD!!! That creates a blocking load, which makes the site wait until all JS and CSS are loaded and parsed before it starts rendering the BODY.
- Sprites – always use sprites for your design elements. Use a JPG sprite for photographic images and images with no transparency; for images with transparency, use PNG. Sprites reduce the number of requests and in some cases even decrease total image size. One warning: keep sprites under about 100 KB.
- The best size for any single request, from my experience, is around 32 KB: not too large and not too wasteful.
- AJAX – load content on demand, or after the main content has already loaded; a good load strategy alone can spare you a lot of code-level optimization work.
- Use JS templates if you want to decrease DOM size.
- Third-party code is bad for performance; load it after the page finishes loading.
- Smart feature design: infinite scrolling instead of loading all items, lazy loading images that are not visible until scrolled to, and so on.
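To see why the GZIP item is such a cheap win, here is a small sketch using Python's standard gzip module; the markup string is invented for illustration, but the ratio is typical for repetitive generated HTML:

```python
import gzip

# Generated HTML is highly repetitive, so it compresses extremely well.
html = ("<div class='product'><img src='item.jpg'/><span>9.99</span></div>\n"
        * 500).encode()

compressed = gzip.compress(html)
print(f"{len(html)} B -> {len(compressed)} B "
      f"({len(compressed) / len(html):.1%} of the original)")
```

In practice you enable compression in the web server configuration rather than in application code; the point is that text-based assets routinely shrink to a tenth of their size or less.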
So most of the optimization topic theory is covered; there are a bunch of articles to read further on this topic.
How to become a great specialist
If you think of programming as only writing code, you're wrong. Programming is a creative process. From my own perspective, programming combines the creative and the creating personalities. When coders write code, they look at it as a piece of art – art that combines minimalism, realism and perfectionism. If you think you don't look at your code creatively, maybe you should?
Basically, a good programmer has not only the creativity to think up or imagine solutions, but also the hands and tools to build them. There are many people who have creative skills and many who have creating skills, but people blessed with both are rare; that's why IT doesn't get thousands of student applications every year.
Even if you have both skills, there are still ways to improve. First, teamwork. The pyramids weren't built by one guy, and software shouldn't be written alone either. Imagine that the code you're working on is not the company's code but yours and your team's. You share the code, so you should protect it when it is in danger (someone trying to add bad code), nurse it when it is sick or has grown up not so pretty, and shape it to be the greatest. The first thing: you must take it as your own.
Second, communication is very important. For example, there are a lot of IT business people who will always try to improve the product, and you must communicate with them to find the best – or at least a better – solution. Don't just follow commands; don't be a tool.
Third, never be passive; always try to improve. From time to time, try to improve the work process, the code or some other part of your work environment. Listening to other people helps a lot: if you have more experienced colleagues, listen to their suggestions and advice. I don't think playing the game of who is most dominant gives good results. You can always disagree – do the research to gather arguments against the idea, or change your mind if the research shows others are right.
In conclusion, you are not alone – you are part of the company. You must be very good not only at programming, but also at the other parts of your daily work life. Loving your work helps!
Learning: Choosing the right back-end
The back-end is the code that runs on the server, so choosing the right back-end is very important for your project. The main factors in the choice are worker price, development speed, setup price and expandability.
For example, skilled .NET labor has a high price per unit of work done, whereas similar work in PHP is cheaper. The simpler the core, the less of a challenge it is to master. From my personal experience, with ASP.NET you can write massive systems in a very short time, and without a highest-class team of specialists. For example, three guys can write a social network (social stream, forums, blogs, friend system and several original features) in just under six months, and the labor is not even that expensive. Problems start with performance, because fast development doesn't result in high quality.
.NET and Java are pretty similar – Java is cheaper but slower to develop with (my personal view). I am far from a PHP expert, but friends say it has its frameworks to speed up development. Ruby on Rails is a fast-growing technology and seems very promising to me; I have heard many good things about RoR as a choice for web development.
Since we are speaking about back-ends, I should mention system architecture. ASP.NET by default uses its Web Forms architecture, which is pretty good but "eats" way too much performance. In my opinion, the best-practice architecture for the web today is MVC. In ASP.NET, MVC removes the problems with page state and life-cycles, and it also allows better unit testing of the application.
As a front-end developer you will end up using one template engine or another. In ASP.NET you will use Web Forms; for ASP.NET MVC you can use the Razor template engine. PHP looks and feels like a template engine out of the box. Java has several different template engines: Velocity (nice to work with), JSP (which looks and feels like ASP.NET, or rather classic ASP), Struts XML (not sure that is the correct name) and Faces (XML-based). Right now I prefer Razor and Velocity as very good template engines.
Learning: The quest for the way into good front-end
How do you fix your front-end code? There are several things you should do:
- HTML and CSS are languages in their own right – treat them with the same discipline as programming languages when writing them.
- There are a bunch of rules and strategies for writing HTML; some of them are:
- Lego approach – think of HTML blocks as Lego parts and construct the page from them. If you build good reusable blocks based on a list of targets for the application, the page ends up with a bit more HTML than the minimum. For example, even if today's target is a lean design, you might construct the page from a block designed for a richer visualization (say, one that could one day have rounded corners around your information blocks). You carry some extra HTML, but you keep the possibility of fast design changes.
- Minimal HTML – here you try to have only the minimal amount of HTML; this style is good for rarely changing designs.
- Other good practices, like table-less HTML layout.
- CSS – the only logical choice is OOCSS (Object-Oriented CSS), which simply applies good programming practices in place of the bad habits of writing inconsistent, non-reusable CSS.
But the main thing you need to do is think before writing code!
Good luck, and a back-end development article is coming next time.
Learning: Image Manipulation and Design
The first thing you should focus on is learning the ABC of image manipulation and web design. They make up the biggest part of your product and are essential to your users, because no user interacts directly with the server side of your application.
Design is a good way to get a first feel of the product without writing a single line of code. If it looks bad as an image, it will be worse once it is coded.
It is very important to know how to go from a design image to real code. From practice I have found two approaches for moving from design to code:
- Designer’s view;
- Programmer’s view.
The designer's view means the designer is 100% right and his idea should be carried out fully, to the pixel. This approach can be great if you have a designer who knows how the web is built. But if your designer is an artist (most designers are artists), they tend to have trouble following rules and keeping consistency, so if you follow their designs to the letter you lose any chance of optimization.
The programmer's view means the designer supplies the idea, but the programmer chooses the right way to implement the solution; the programmer can make minor changes to the design to fix its inconsistencies and adjust it to be optimal and great. This way means sacrificing some of the details the artist created.
I worked in both ways. I really prefer the second one.
You may be asking yourself, "Why do I need to know all about web design and image manipulation?" The answer is simple: it is the biggest part of creating web products. There are critics who argue that other parts are more important, but in my personal opinion a good idea becomes a good design, and a good design becomes a good product. If you want a cheap, high-quality product, you must code it once. By coding I mean not only the programming, but the whole life cycle of the feature.
The next article will give you some insight into good front-end programming.
The learning series: Prologue
We all know that IT specialists are forced to learn all their lives. So what should you learn? Today I am a front-end developer, so should I learn only front-end techniques? I guess not. Limiting yourself to only one or two technologies, in my opinion, makes you vulnerable to technology evolution.
My first professional experience came from working with ASP.NET. At first I thought this technology dominates because it lets you create a cheaper product and bring the software faster from requirements to realization. This is very good for business software, but not for competitive, publicly accessible software.
This is because of two main reasons:
- The costs;
- Final product quality.
The costs – count what it would cost you to scale the software if, by some small chance, your product became popular. Even the startup costs are not small, and if you have the money for ASP.NET and then rely on the properties that make software writing fast, you will probably pay a great deal in software quality.
But all in all ASP.NET is very nice to use and that makes it my choice for business software.
Java is good if you want cheaper software costs – it mostly removes the big startup costs – but developers are as highly paid as in ASP.NET, and you will still end up using commercial servers for your web-based software. So in this case you will most likely end up with nearly the same result.
So what should you choose if you want to become a web developer? What do you need to learn to master the craft? This is my blog series in which I will try to give all the answers you need. In these entries I want to share a little of my recent experience, combined with my older knowledge. So, to become a web developer you need to master:
- Design and image manipulation;
- UI languages;
- Server side languages.
This is the first part of a series of entries about useful things to learn. Some things I am planning to include are: OOCSS, Ruby, Velocity, Java, ASP.NET, Photoshop, GIMP and others.