Our goal is to fill a two-dimensional array with 1’s.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

namespace ToArrays
{
    public class Program
    {
        public const int Size = 10000;

        private static void Main()
        {
            BenchmarkRunner.Run<Program>();
        }

        [Benchmark]
        public void IxJ()
        {
            var array = new int[Size, Size];
            for (var i = 0; i < Size; i++)
            {
                for (var j = 0; j < Size; j++)
                {
                    array[i, j] = 1;
                }
            }
        }

        [Benchmark]
        public void JxI()
        {
            var array = new int[Size, Size];
            for (var i = 0; i < Size; i++)
            {
                for (var j = 0; j < Size; j++)
                {
                    array[j, i] = 1;
                }
            }
        }
    }
}
```

Could you say which version is faster and why?

### This Post Has 6 Comments

1. Yan Justino

I'll take a guess 😀 I believe that although the two algorithms apparently have the same complexity (Θ(n²)), the degradation of the second one is greater, since for each "i" I need to perform a Θ(n) operation on "j", whereas in the first strategy the cost is paid on the first iteration of "i" and the remaining ones cost only Θ(1) to read.

1. Elemar Júnior

I am not sure I got your point. 🙂

2. Diogo

I think the two functions are the same, aren't they?

1. Elemar Júnior

No. They are not.

3. Cleber Dantas

Nice question, indeed.

I didn’t run the code (yet), but probably the JxI method is slower.

It has to do with the memory layout and how the 10000² writes will be performed; it can basically go one of two ways, like:

[0, 1]
[0, 2]
[0, 3]
[0, …]

or:

[1, 0]
[2, 0]
[3, 0]
[…, 0]

Depending on how memory is laid out by the runtime, one of the two methods will be slower.

The fastest one will basically read memory addresses 'sequentially', while the other will 'jump' 10000 addresses on each iteration.

The fastest one will have its addresses nearby; the slower one will not. The problem is not the 'jump' per se, since accessing an array by its index is O(1).

The real problem is the CPU cache: the slower method will incur a lot of cache misses, forcing the CPU to go out to RAM (which is much slower than cache).
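The layout Cleber describes can be checked directly: .NET stores rectangular arrays in row-major order, so element `[i, j]` sits at linear offset `i * size + j`, which is why `IxJ` touches consecutive addresses while `JxI` jumps `size` elements per step. A minimal sketch (the `RowMajorDemo` class and its method name are illustrative, not part of the original post):

```csharp
using System;

public static class RowMajorDemo
{
    // Returns true when the 2D array's memory order matches the
    // [i, j] visiting order, i.e. offset(i, j) == i * size + j.
    public static bool IsRowMajorSequential(int size)
    {
        var array = new int[size, size];
        for (var i = 0; i < size; i++)
            for (var j = 0; j < size; j++)
                array[i, j] = i * size + j; // store the expected linear offset

        // Copy the raw bytes into a 1D array to observe the true memory order.
        var flat = new int[size * size];
        Buffer.BlockCopy(array, 0, flat, 0, sizeof(int) * size * size);

        for (var k = 0; k < flat.Length; k++)
            if (flat[k] != k)
                return false; // memory order diverged from [i, j] order
        return true;
    }

    public static void Main()
    {
        // IxJ walks consecutive ints; JxI strides `size` ints per step.
        Console.WriteLine(IsRowMajorSequential(4)); // prints True
    }
}
```

Since each cache line holds several consecutive ints, the sequential walk reuses every line it loads, while the strided walk evicts lines before revisiting them.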