commit    5a265cc8ebef1614581661aa77b9290de3a493e1
author    2017-03-19 13:54:09 -0700
committer 2017-03-19 13:54:09 -0700
tree      49bca460323fe2dfe8b50787400399e426e4c05e /tests
parent    4fe4e9429a9ad6ec51060c35a382b69651691f8d
Add inline attributes to Vec's MutBuf methods (#80)
I found this significantly improved a
[benchmark](https://gist.github.com/danburkert/34a7d6680d97bc86dca7f396eb8d0abf)
which calls `bytes_mut`, writes 1 byte, and advances the pointer with
`advance_mut` in a pretty tight loop. In particular, it seems to be the
inline annotation on `bytes_mut` that had the most effect. I also took
the opportunity to simplify the bounds checking in `advance_mut`.
before:
```
test encode_varint_small ... bench: 540 ns/iter (+/- 85) = 1481 MB/s
```
after:
```
test encode_varint_small ... bench: 422 ns/iter (+/- 24) = 1895 MB/s
```
As you can see, the variance is also significantly improved.
Interestingly, I tried to change the last statement in `bytes_mut` from
```
&mut slice::from_raw_parts_mut(ptr, cap)[len..]
```
to
```
slice::from_raw_parts_mut(ptr.offset(len as isize), cap - len)
```
but this caused a very measurable perf regression (almost completely
negating the gains from marking `bytes_mut` inline).