author | 2017-03-19 13:54:09 -0700
---|---
committer | 2017-03-19 13:54:09 -0700
commit | 5a265cc8ebef1614581661aa77b9290de3a493e1
tree | 49bca460323fe2dfe8b50787400399e426e4c05e /src/buf/buf_mut.rs
parent | 4fe4e9429a9ad6ec51060c35a382b69651691f8d
Add inline attributes to Vec's MutBuf methods (#80)
I found this significantly improved a
[benchmark](https://gist.github.com/danburkert/34a7d6680d97bc86dca7f396eb8d0abf)
which calls `bytes_mut`, writes 1 byte, and advances the pointer with
`advance_mut` in a pretty tight loop. In particular, it seems to be the
inline annotation on `bytes_mut` that had the most effect. I also took
the opportunity to simplify the bounds checking in `advance_mut`.
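For context, here is a minimal sketch of a loop in that shape. It is hypothetical (the gist's actual benchmark encodes varints), and it is written against the `BufMut` trait as it appears in the diff below; newer releases of `bytes` renamed `bytes_mut` to `chunk_mut`:
```
use bytes::BufMut;

// Hypothetical reduction of the benchmarked pattern: one byte written
// per iteration, with a trait-method call on both sides of the write.
fn write_bytes_one_at_a_time(buf: &mut Vec<u8>, n: usize) {
    for i in 0..n {
        unsafe {
            // Uninitialized tail of the Vec; the Vec impl grows it if full.
            buf.bytes_mut()[0] = i as u8;
            // Commit the single byte written above.
            buf.advance_mut(1);
        }
    }
}
```
With the trait methods outlined, every iteration pays two function calls for a single byte of progress, which is why the `#[inline]` hints matter here.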
before:
```
test encode_varint_small ... bench: 540 ns/iter (+/- 85) = 1481 MB/s
```
after:
```
test encode_varint_small ... bench: 422 ns/iter (+/- 24) = 1895 MB/s
```
As you can see, the variance is also significantly improved.
Interestingly, I tried to change the last statement in `bytes_mut` from
```
&mut slice::from_raw_parts_mut(ptr, cap)[len..]
```
to
```
slice::from_raw_parts_mut(ptr.offset(len as isize), cap - len)
```
but this caused a measurable perf regression (almost completely
negating the gains from marking `bytes_mut` inline).
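To make the comparison concrete, here is a sketch of the two variants as standalone functions (the names and free-function framing are hypothetical; the pointer/length setup mirrors the `bytes_mut` body in the diff below):
```
use std::slice;

// Variant kept in the commit: materialize the full capacity-length slice,
// then re-slice off the initialized prefix.
unsafe fn spare_tail_reslice(v: &mut Vec<u8>) -> &mut [u8] {
    let (len, cap) = (v.len(), v.capacity());
    let ptr = v.as_mut_ptr();
    &mut slice::from_raw_parts_mut(ptr, cap)[len..]
}

// Rejected variant: offset the pointer first and build only the tail.
// Semantically equivalent, but it benchmarked measurably slower here.
unsafe fn spare_tail_offset(v: &mut Vec<u8>) -> &mut [u8] {
    let (len, cap) = (v.len(), v.capacity());
    let ptr = v.as_mut_ptr();
    slice::from_raw_parts_mut(ptr.offset(len as isize), cap - len)
}
```
Both compute the same spare-capacity tail; the difference is only in how the slice is constructed, which evidently changes the code the optimizer emits.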
Diffstat
-rw-r--r-- | src/buf/buf_mut.rs | 18
1 file changed, 10 insertions(+), 8 deletions(-)
```
diff --git a/src/buf/buf_mut.rs b/src/buf/buf_mut.rs
index 848a30a..91eacbf 100644
--- a/src/buf/buf_mut.rs
+++ b/src/buf/buf_mut.rs
@@ -699,23 +699,25 @@ impl<T: AsMut<[u8]> + AsRef<[u8]>> BufMut for io::Cursor<T> {
 }
 
 impl BufMut for Vec<u8> {
+    #[inline]
     fn remaining_mut(&self) -> usize {
         usize::MAX - self.len()
     }
 
+    #[inline]
     unsafe fn advance_mut(&mut self, cnt: usize) {
-        let cap = self.capacity();
-        let len = self.len().checked_add(cnt)
-            .expect("overflow");
-
-        if len > cap {
-            // Reserve additional
-            self.reserve(cap - len);
+        let len = self.len();
+        let remaining = self.capacity() - len;
+        if cnt > remaining {
+            // Reserve additional capacity, and ensure that the total length
+            // will not overflow usize.
+            self.reserve(cnt - remaining);
         }
 
-        self.set_len(len);
+        self.set_len(len + cnt);
     }
 
+    #[inline]
     unsafe fn bytes_mut(&mut self) -> &mut [u8] {
         use std::slice;
```
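As a reading aid, here is the new `advance_mut` logic pulled out as a standalone sketch, with a worked example in the comments (illustrative numbers, not from the commit):
```
/// Hypothetical standalone version of the new bounds check.
unsafe fn advance_mut_sketch(v: &mut Vec<u8>, cnt: usize) {
    let len = v.len();
    let remaining = v.capacity() - len;
    if cnt > remaining {
        // e.g. len = 10, capacity = 12, cnt = 5:
        // remaining = 2, so we reserve(3). Vec::reserve panics if the new
        // capacity would overflow, which subsumes the old explicit
        // checked_add(cnt).expect("overflow").
        v.reserve(cnt - remaining);
    }
    // After the branch, len + cnt <= capacity, so the new length is in bounds.
    v.set_len(len + cnt);
}
```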