VirtIO 1.0 and multi-queue support #998

base: master

Conversation
3a48da6 to e7b1927
e7b1927 to 59f9711
citrus-it left a comment:
I've reviewed this mostly from a VirtIO/viona perspective.
```rust
#[derive(Default)]
pub struct vioc_intr_poll {
    pub vip_nrings: u16,
    pub vip_status: [u32; howmany(VIONA_MAX_QPAIRS, 32)],
```
Since you have the padding explicitly shown in other structures here, do you want to do the same for the hole here?
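A minimal sketch of what making the hole explicit might look like, with `howmany` and `VIONA_MAX_QPAIRS` reproduced from the surrounding diff for self-containment; the `_vip_pad` field name is a hypothetical choice, not from the PR:

```rust
pub const VIONA_MAX_QPAIRS: usize = 0x100;

pub const fn howmany(x: usize, y: usize) -> usize {
    (x + y - 1) / y
}

// Sketch only: make the 2-byte hole between vip_nrings and the
// 4-byte-aligned vip_status array explicit, matching the style of the
// other structures in this file.
#[allow(non_camel_case_types)]
#[repr(C)]
#[derive(Default)]
pub struct vioc_intr_poll {
    pub vip_nrings: u16,
    pub _vip_pad: u16, // explicit padding for the compiler-inserted hole
    pub vip_status: [u32; howmany(VIONA_MAX_QPAIRS, 32)],
}
```

The explicit field does not change the `repr(C)` layout (the compiler would insert the same 2 bytes of padding anyway); it only documents it.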
```rust
#[repr(C)]
#[derive(Default)]
pub struct vioc_nofity_mmio {
```
Suggested change:

```diff
-pub struct vioc_nofity_mmio {
+pub struct vioc_notify_mmio {
```
```rust
pub const VIONA_MIN_QPAIRS: usize = 1;
pub const VIONA_MAX_QPAIRS: usize = 0x100;
```
Perhaps a comment here to say that this limit comes from viona and is deliberately far lower than that allowed by the spec (0x8000)? 256 queue pairs should be enough for anyone… or something.
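One possible wording for such a comment, as a sketch (the doc-comment text is illustrative, not from the PR):

```rust
pub const VIONA_MIN_QPAIRS: usize = 1;

/// Upper bound on virtqueue pairs. This limit comes from viona itself
/// and is deliberately far lower than the 0x8000 pairs the VirtIO spec
/// would allow.
pub const VIONA_MAX_QPAIRS: usize = 0x100;
```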
```rust
| VNA_IOC_GET_MTU
| VNA_IOC_SET_MTU
| VNA_IOC_SET_PAIRS
| VNA_IOC_SET_USEPAIRS,
```
Both GET_PAIRS and GET_USEPAIRS should also be safe in terms of not requiring copyin/copyout.
I appreciate they are probably not used.
```rust
/// polling.
V6 = 6,

/// Adds support for VirtIO 1.2 (modern) virtqueues.
```
Technically, modern queues came in with VirtIO 1.0.
```rust
CommonConfigReg::QueueSize => {
    let state = self.state.lock().unwrap();
    let offered =
        VqSize::try_from(wo.read_u16()).expect("bad queue size");
```
If a guest sends a nonsensical queue size here (such as 0 or something which is not a power of two) we probably ought not panic.
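A sketch of one way to tolerate a bad value instead of panicking. `VqSize` here is a stand-in for the real type, and the validation rules (non-zero, power of two, at most the spec maximum of 0x8000) are assumptions about what "nonsensical" should mean:

```rust
// Stand-in VqSize with a fallible constructor; the real type and its
// TryFrom impl live elsewhere in the crate.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct VqSize(pub u16);

impl TryFrom<u16> for VqSize {
    type Error = u16;

    fn try_from(raw: u16) -> Result<Self, Self::Error> {
        // Reject sizes that are zero, not a power of two, or beyond the
        // spec maximum of 0x8000 entries.
        if raw == 0 || !raw.is_power_of_two() || raw > 0x8000 {
            Err(raw)
        } else {
            Ok(VqSize(raw))
        }
    }
}

/// Ignore a nonsensical guest write rather than panicking on a
/// guest-controlled value; the queue keeps whatever size it had before.
pub fn handle_queue_size_write(raw: u16) -> Option<VqSize> {
    VqSize::try_from(raw).ok()
}
```

The key design point is that the `expect("bad queue size")` on a guest-controlled register becomes an `ok()`/early-return, so a misbehaving driver cannot crash the device emulation.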
```rust
CommonConfigReg::QueueNotifyData => {
    // Read-only for driver
}
CommonConfigReg::QueueReset => self.set_needs_reset(dev),
```
This is a register for per-queue reset, not the entire device. Since we aren't advertising VIRTIO_F_RING_RESET we never expect to see it, and setting the reset-required bit is probably not an unreasonable response!
```rust
// logic) is kept well aware
// interested in the placement of those BARs (such as the notify
// logic) is configured properly.
self.bar_update(ps.bar(pci::BarN::BAR0).unwrap());
```
Although we aren't using it yet, the BAR creation logic only creates BAR0 if "Legacy" or "Transitional" mode was selected. We probably need the same existence check as for BAR2 here.
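A sketch of the guarded form, with mocks standing in for the real PCI state (the `PciState` and `Dev` types here are illustrative, not the propolis ones):

```rust
// Mock of the PCI state: bar() yields the BAR only when it was created.
struct PciState {
    bar0: Option<u64>,
}

impl PciState {
    fn bar(&self, _n: u8) -> Option<u64> {
        self.bar0
    }
}

struct Dev {
    last_bar0: Option<u64>,
}

impl Dev {
    fn bar_update(&mut self, addr: u64) {
        self.last_bar0 = Some(addr);
    }

    /// Mirror the BAR2 handling: only push the update when BAR0 exists,
    /// since BAR0 is created only in Legacy/Transitional mode.
    fn maybe_update_bar0(&mut self, ps: &PciState) {
        if let Some(bar) = ps.bar(0) {
            self.bar_update(bar);
        }
    }
}
```

The `if let Some(...)` replaces the `.unwrap()`, so a modern-only configuration with no BAR0 simply skips the update instead of panicking.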
```rust
    used.used_idx = Wrapping(info.used_idx);
}

/// Accummulates a sequence of available descrptors into a `Chain`.
```
Suggested change:

```diff
-/// Accummulates a sequence of available descrptors into a `Chain`.
+/// Accummulates a sequence of available descriptors into a `Chain`.
```
```rust
    }
}

impl MigrateMulti for PciVirtioViona {
```
It seems we'll need to push at least the number of selected queue pairs into the migration state here.
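A sketch of what carrying that through migration might look like; the struct and field names are illustrative, not the real propolis migration types:

```rust
// Illustrative migration payload: record how many queue pairs were
// configured so the target can recreate the same ring layout before
// importing per-ring state.
#[derive(Debug, PartialEq)]
pub struct VionaStateV2 {
    pub num_queue_pairs: u16,
    pub rings: Vec<RingState>,
}

#[derive(Debug, PartialEq)]
pub struct RingState {
    pub avail_idx: u16,
    pub used_idx: u16,
}

impl VionaStateV2 {
    /// Basic cross-check on import: the ring count must match the
    /// recorded pair count (one RX and one TX ring per pair).
    pub fn validate(&self) -> Result<(), String> {
        let expect = usize::from(self.num_queue_pairs) * 2;
        if self.rings.len() == expect {
            Ok(())
        } else {
            Err(format!(
                "expected {} rings for {} pairs, got {}",
                expect,
                self.num_queue_pairs,
                self.rings.len()
            ))
        }
    }
}
```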
No description provided.