JavaScript: Why <= is slower than <?
I am reading the slides Breaking the JavaScript Speed Limit with V8, and they contain an example like the code below. I cannot figure out why <= is slower than < in this case. Can anybody explain? Any comments are appreciated.
<--slow-->
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i <= this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
(Hint: primes is an array of length prime_count)
<--faster-->
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i < this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
javascript v8
The "slower" version iterates one more time than the "faster" version (when i == this.prime_count). – TypeIA, 4 hours ago

@DacreDenny The computational difficulty of <= and < is identical, both in theory and in actual implementation in all modern processors (and interpreters). – TypeIA, 4 hours ago

I have read the document; there is a main routine that calls this function in a loop that runs 25000 times, so you do far fewer iterations overall with that change. Also, if an array has a length of 5, trying to read array[5] goes outside its bounds and yields undefined, since array indexing starts at 0. – D. Smania, 4 hours ago
asked 4 hours ago by Leonardo Physh
edited 3 hours ago by Michael Geary
2 Answers
13 votes, accepted – answered 4 hours ago by Michael Geary, edited 2 hours ago
Other answers and comments mention that the difference between the two loops is that the first one executes one more iteration than the second. This is true, but in an array that grows to 25,000 elements, one iteration more or less makes only a minuscule difference. As a ballpark guess, if we assume the average length as it grows is 12,500, then the difference we might expect is around 1/12,500, or only 0.008%.
The performance difference here is much larger than would be explained by that one extra iteration, and the problem is explained near the end of the presentation.
this.primes is a contiguous array (every element holds a value) and the elements are all numbers.
A JavaScript engine may optimize such an array to be a simple array of actual numbers, instead of an array of objects which happen to contain numbers but could contain other values or no value. The first format is much faster to access: it takes less code, and the array is much smaller so it fits better in cache. But there are some conditions that can prevent this optimized format from being used.
One condition would be if some of the array elements are missing. For example:
let a = [];
a[0] = 10;
a[2] = 20;
Now what is the value of a[1]? It has no value. (It isn't even correct to say it has the value undefined - an array element containing the undefined value is different from an array element that is missing entirely.)
There isn't a way to represent this with numbers only, so the JavaScript engine is forced to use the less optimized format. If a[1]
contained a numeric value like the other two elements, the array could potentially be optimized into an array of numbers only.
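This distinction is easy to observe directly. Here is a minimal sketch (the variable names are mine, not from the talk):

```javascript
// An array with a hole at index 1 versus one holding an explicit undefined.
const holey = [];
holey[0] = 10;
holey[2] = 20;

const explicit = [10, undefined, 20];

// Both read back as undefined...
console.log(holey[1]);      // undefined
console.log(explicit[1]);   // undefined

// ...but only one of them actually has an element at index 1.
console.log(1 in holey);    // false - a hole: no element exists
console.log(1 in explicit); // true  - an element that holds undefined
```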
Another reason for an array to be forced into the deoptimized format can be if you attempt to access an element outside the bounds of the array, as discussed in the presentation.
The first loop with <= attempts to read an element past the end of the array. This doesn't prevent the algorithm from working correctly, because in the last iteration:
- this.primes[i] evaluates to undefined because i is past the array end.
- candidate % undefined (for any value of candidate) evaluates to NaN.
- NaN == 0 evaluates to false.
- Therefore, the return true is not executed.
So the extra iteration has no effect on the logic. The code produces the same result as it would without the extra iteration.
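That chain of evaluations can be verified in any JavaScript console:

```javascript
const primes = [2, 3, 5];   // length 3, valid indices 0..2

console.log(primes[3]);     // undefined - read past the end
console.log(7 % undefined); // NaN
console.log(NaN == 0);      // false
```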
But to get there, it tried to read a nonexistent element past the end of the array. This forces the array out of optimization - or at least did at the time of this talk.
The second loop with < reads only elements that exist within the array, so it allows an optimized array and optimized code.
The problem is described in pages 90-91 of the talk, with related discussion in the pages before and after that.
I happened to attend this very Google I/O presentation and talked with the speaker (one of the V8 authors) afterward. I had been using a technique in my own code that involved reading past the end of an array as a misguided (in hindsight) attempt to optimize one particular situation. He confirmed that even trying to read past the end of an array would prevent the simple optimized format from being used.
If what the V8 author said is still true, then reading past the end of the array would prevent it from being optimized and it would have to fall back to the slower format.
Now it's possible that V8 has been improved in the meantime to efficiently handle this case, or that other JavaScript engines handle it differently. I don't know one way or the other on that, but this deoptimization is what the presentation was talking about.
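If you want to check whether the engine you are running today still penalizes the out-of-bounds read, a rough micro-benchmark along these lines can help (this sketch and its names are mine, not from the talk, and timings vary by engine and version):

```javascript
// Compare the two loop conditions on the same numeric array.
// The inclusive variant reads primes[primes.length], which is past the end.
function countDivisible(candidate, primes, inclusive) {
  let hits = 0;
  const n = primes.length;
  for (let i = 0; inclusive ? i <= n : i < n; ++i) {
    if (candidate % primes[i] === 0) hits++; // NaN === 0 is false, so the extra read is harmless
  }
  return hits;
}

const primes = Array.from({ length: 25000 }, (_, i) => 2 * i + 3);

for (const inclusive of [false, true]) {
  const t0 = Date.now();
  let total = 0;
  for (let k = 0; k < 200; ++k) total += countDivisible(977, primes, inclusive);
  console.log(inclusive ? "<=" : "< ", Date.now() - t0, "ms, total:", total);
}
```

Both variants return the same totals, so any large timing gap points at a deoptimized array rather than at the one extra iteration.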
2 votes – answered 4 hours ago by Adrian Brand
It's the for loop: with i <= this.prime_count you do one extra iteration compared to i < this.prime_count, because the latter only loops while i is less than prime_count rather than until i equals prime_count.
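Counting iterations makes the off-by-one concrete:

```javascript
const prime_count = 5;

let withLte = 0;
for (let i = 1; i <= prime_count; ++i) withLte++; // i = 1, 2, 3, 4, 5

let withLt = 0;
for (let i = 1; i < prime_count; ++i) withLt++;   // i = 1, 2, 3, 4

console.log(withLte, withLt); // 5 4
```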