Why do programmers count from zero, when everyone else starts at 1?
Computer programming is all about efficiency, and even small improvements in efficiency can make big differences at scale.
And yes, counting from zero is slightly more efficient than starting at 1.
Let’s explore a simple mathematical inequality to understand why:
If we count from zero, every value in an array of length N can be described by the following inequality, where i represents the numerical position of each value:
0 ≤ i < N
Our color array from before has 5 total values. If we were to take each value’s subscript (its numerical position in the array) and slot it into this inequality:
‘blue’ (index 0): 0 ≤ 0 < 5 // true!
‘yellow’ (index 1): 0 ≤ 1 < 5 // also true!
‘white’ (index 4): 0 ≤ 4 < 5 // yep, yep, true too!
Can we all agree that each of these inequalities is true?
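In code, this half-open range is exactly the bound a typical zero-based loop checks: the index starts at 0 and the loop runs while i < N. Here is a minimal sketch in TypeScript; the array below is assumed to match the five-color example (‘blue’, ‘yellow’, and ‘white’ come from the text, while ‘red’ and ‘purple’ are placeholders for the two unnamed values):

```typescript
// Hypothetical five-color array; 'red' and 'purple' stand in for the
// two values not named in the article.
const colors: string[] = ['blue', 'yellow', 'red', 'purple', 'white'];

const N = colors.length; // 5

// Every valid index i satisfies 0 ≤ i < N, so the loop bound is simply N.
for (let i = 0; i < N; i++) {
  console.log(`${colors[i]} is at index ${i}`);
}
```

Note how the length of the array (5) appears directly as the loop bound, with no extra arithmetic.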
Now, if we were to count from 1, every value in an array of length N could be described by the following inequality, where i again represents the numerical position of each value:
1 ≤ i < N + 1
So for a moment, let’s consider this alternative array of colors I don’t like, indexed from 1:
[‘beige’₁, ‘orange’₂, ‘green’₃]
The inequalities now look like this:
‘beige’ (index 1): 1 ≤ 1 < 4 // true!
‘orange’ (index 2): 1 ≤ 2 < 4 // huh, also true!
‘green’ (index 3): 1 ≤ 3 < 4 // too true!
Those are also true, right? So what’s the problem?
The problem is found in the N + 1 part of the inequality.
You see, what that means is that in order to check the bound, the computer has to find the length of the array and then add 1 to it. Sure, adding 1 isn’t a hard task, but it is extra work the computer never has to do with the zero-based inequality, and therefore starting the count at zero wins!
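To make the comparison concrete, here is a sketch of both loop bounds in TypeScript, using the hypothetical three-color array from above (the variable name dislikedColors is an assumption, not from the article). The zero-based loop compares i against the length directly, while the 1-based loop’s condition carries the extra + 1:

```typescript
const dislikedColors: string[] = ['beige', 'orange', 'green'];

// Zero-based: the bound is just the length (0 ≤ i < N).
for (let i = 0; i < dislikedColors.length; i++) {
  console.log(`${dislikedColors[i]} at index ${i}`);
}

// One-based: the bound is length + 1 (1 ≤ i < N + 1), so the condition
// carries an extra "+ 1". Because the underlying JavaScript array is still
// zero-based, we subtract 1 when reading from it.
for (let i = 1; i < dislikedColors.length + 1; i++) {
  console.log(`${dislikedColors[i - 1]} at position ${i}`);
}
```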
Taken from here: Why programmers start counting at zero | Skillcrush