**getting-started-guide/creating-development-environments/virtualization/qemu.md**
{% tab title="RHEL/CentOS" %}
{% hint style="info" %}
Persistent Memory/NVDIMM support was introduced in QEMU 2.6.0. See the QEMU documentation at [https://github.com/qemu/qemu/blob/master/docs/nvdimm.txt](https://github.com/qemu/qemu/blob/master/docs/nvdimm.txt).

The version of QEMU provided in the CentOS 7.x package repository is v1.5.0, which does not support Persistent Memory/NVDIMMs.

It is recommended to download and build the latest QEMU source code from [https://www.qemu.org/download/\#source](https://www.qemu.org/download/#source).
{% endhint %}
## Installing QEMU from source code
Download and build the latest QEMU source code from [https://www.qemu.org/download/\#source](https://www.qemu.org/download/#source). We use QEMU 4.0.0 as an example:
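A typical source build follows this general pattern (a sketch; mirror URL and configure options may vary, and the `--enable-libpmem` option is discussed in Note 2 below):

```text
$ wget https://download.qemu.org/qemu-4.0.0.tar.xz
$ tar xf qemu-4.0.0.tar.xz
$ cd qemu-4.0.0
$ ./configure --enable-libpmem
$ make -j
$ sudo make install
```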
{% hint style="info" %}
**Note 1:** If the `configure` script fails with an error such as:

```text
ERROR: glib-2.40 gthread-2.0 is required to compile QEMU
```

The solution is to install `gtk2-devel`:
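On CentOS 7.x, for example:

```text
# yum install gtk2-devel
```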
You can now re-execute the `configure` command and it should work.
{% endhint %}
{% hint style="info" %}
**Note 2:** The `--enable-libpmem` option for `configure` requires that `libpmem` from the Persistent Memory Development Kit \(PMDK\) be installed as a prerequisite. See [Installing PMDK](../../installing-pmdk/) for instructions.
{% endhint %}
{% endtab %}
Where,
* `maxmem=$MAX_SIZE` should be equal to or larger than the total size of normal RAM devices and vNVDIMM devices, e.g. $MAX\_SIZE should be >= $RAM\_SIZE + $NVDIMM\_SIZE here.
* `object memory-backend-file,id=mem1,share=on,mem-path=$PATH,size=$NVDIMM_SIZE` creates a backend storage of size `$NVDIMM_SIZE` on the file `$PATH`. All accesses to the virtual NVDIMM device go to the file `$PATH`.
* `share=on/off` controls the visibility of guest writes. If `share=on`, then guest writes will be applied to the backend file. If another guest uses the same backend file with option `share=on`, those writes will be visible to it as well. If `share=off`, then guest writes won't be applied to the backend file and thus will be invisible to other guests.
* `device nvdimm,id=nvdimm1,memdev=mem1` creates a virtual NVDIMM device whose storage is provided by the above memory backend device.

Multiple vNVDIMM devices can be created if multiple pairs of `-object` and `-device` are provided. See Example 1 below.
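As an illustration, a minimal invocation with a single 4GiB vNVDIMM might look like this (a sketch; the backing path `/tmp/nvdimm1` and the sizes are placeholders):

```text
$ qemu-system-x86_64 \
    -machine pc,nvdimm=on \
    -m 4G,slots=4,maxmem=16G \
    -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/nvdimm1,size=4G \
    -device nvdimm,id=nvdimm1,memdev=mem1
```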
### **Creating a Guest with Two Emulated vNVDIMMs**
The following example creates a Fedora 27 guest with two 4GiB vNVDIMMs, 4GiB of DDR Memory, 4 vCPUs, a VNC server on port 0 for console access, and SSH traffic redirected from port 2222 on the host to port 22 in the guest for direct SSH access from a remote system.
```text
# sudo qemu-img create -f raw /virtual-machines/qemu/fedora27.raw 20G
```
For a detailed description of the options shown above, and many others, refer to the [QEMU User's Guide](https://qemu.weilnetz.de/doc/qemu-doc.html).
When creating a brand new QEMU guest with a blank OS disk image file, an ISO needs to be presented to the guest from which the OS can be installed. A guest can access a local or remote ISO using:
Local ISO:
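For example, a local ISO can be attached with the `-cdrom` option (the path below is a placeholder):

```text
-cdrom /iso/fedora27.iso
```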
Remote ISO:
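A QEMU build with curl support can also boot directly from an HTTP(S) URL, for example (the URL is a placeholder):

```text
-cdrom http://mirror.example.com/iso/fedora27.iso
```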
#### Open Firewall Ports
To access the guests remotely, the firewall on the host system needs to be opened to allow remote access for VNC and SSH. In the following examples, we open a range of ports to accommodate several guests. You only need to open the ports for the number of guests you plan to use.
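With `firewalld`, for example, a range of VNC and SSH forwarding ports can be opened like this (a sketch; adjust the ranges to the number of guests):

```text
# firewall-cmd --zone=public --add-port=5900-5910/tcp --permanent
# firewall-cmd --zone=public --add-port=2222-2232/tcp --permanent
# firewall-cmd --reload
```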
Use a VNC viewer to access the console to complete the OS installation and access the guest. The default VNC port starts at 5900. The example used `-vnc :0`, which equates to port 5900.
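With a TigerVNC-style client, for example, connecting to display 0 on the host looks like this (the hostname is a placeholder):

```text
$ vncviewer hostname:0
```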
Additionally, once the operating system has been configured, you can connect to the guest via SSH. Specify the username configured during the guest OS installation process and the hostname or IP address of the host system, e.g.:
```text
# ssh user@hostname -p 2222
```

For example, the following commands add another 4GB vNVDIMM device to the guest:
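In outline, the sequence uses the QEMU monitor's `object_add` and `device_add` commands (a sketch; the IDs and backing path are illustrative):

```text
(qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=/virtual-machines/qemu/nvdimm2,size=4G
(qemu) device_add nvdimm,id=nvdimm2,memdev=mem2
```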
Each hotplugged vNVDIMM device consumes one memory slot. Users should always ensure the memory option `-m ...,slots=N` specifies a sufficient number of slots, i.e.

N >= number of RAM devices + number of statically plugged vNVDIMM devices + number of hotplugged vNVDIMM devices
A similar requirement applies to the memory option `-m ...,maxmem=M`, i.e.

M >= size of RAM devices + size of statically plugged vNVDIMM devices + size of hotplugged vNVDIMM devices
More detailed information about the HotPlug feature can be found in the [QEMU Memory HotPlug Documentation](https://github.com/qemu/qemu/blob/master/docs/memory-hotplug.txt).
Though QEMU supports multiple types of vNVDIMM backends on Linux, the only backends that can guarantee guest write persistence are:
A. DAX device \(e.g., /dev/dax0.0\) or
B. DAX file \(mounted with the `-o dax` option\)
When using B \(a file supporting direct mapping of persistent memory\) as a backend, write persistence is guaranteed if the host kernel has support for the MAP\_SYNC flag in the `mmap` system call \(available since Linux 4.15 and on certain distro kernels\) and additionally both 'pmem' and 'share' flags are set to 'on' on the backend.
## CPUID Flags
By default, qemu claims the guest machine supports only `clflush`. As any modern machine \(starting with Skylake and Pinnacle Ridge\) has `clflushopt` or `clwb` \(Cannon Lake\), you can significantly improve performance by passing a `-cpu` flag to qemu. Unless you require live migration, `-cpu host` is a good choice.
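For example, appending the flag to an invocation like the one shown earlier (a fragment, not a complete command line):

```text
$ qemu-system-x86_64 -cpu host -machine pc,nvdimm=on ...
```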
**getting-started-guide/installing-pmdk/compiling-pmdk-from-source.md**
# Installing PMDK from Source on Linux
## Overview
This procedure describes how to clone the PMDK source code from its GitHub repository, compile it, and install it.
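In outline, the process looks like the following sketch (prerequisites and install options are covered in the rest of this page):

```text
$ git clone https://github.com/pmem/pmdk
$ cd pmdk
$ make -j
$ sudo make install
```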
{% hint style="info" %}
**Note:** We recommend [installing NDCTL](../installing-ndctl.md) first so PMDK builds all features. If the ndctl development packages and header files are not installed, PMDK will build successfully, but will disable some of the RAS \(Reliability, Availability and Serviceability\) features.
{% endhint %}
If your system is behind a firewall and requires a proxy to access the Internet, configure your package manager to use a proxy.
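For example, with `dnf` the proxy can be set in its configuration file (the proxy URL is a placeholder; other package managers have equivalent settings):

```text
# echo "proxy=http://proxy.example.com:3128" >> /etc/dnf/dnf.conf
```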
To build the PMDK libraries on Linux, you may need to install the following required packages:

**For Ubuntu 16.04 \(Xenial\) and Debian 8 \(Jessie\):**
{% hint style="info" %}
Earlier releases of Ubuntu and Debian do not have libfabric-dev available in the repository. If this library is required, you should compile it yourself. See [https://github.com/ofiwg/libfabric](https://github.com/ofiwg/libfabric).
{% endhint %}
To install this library into other locations, you can use the `prefix=path` option:

```text
$ sudo make install prefix=/usr
```
If you installed to a non-standard directory \(anything other than /usr\), you may need to add $prefix/lib or $prefix/lib64 \(depending on the distribution you use\) to the list of directories searched by the linker:
```text
sudo sh -c "echo /usr/local/lib >> /etc/ld.so.conf"
sudo sh -c "echo /usr/local/lib64 >> /etc/ld.so.conf"
```

`ipmctl` is an open source utility created and maintained by Intel to manage Intel® Optane™ DC persistent memory modules. `ipmctl`, which works on both Linux and Windows, is a vendor-specific tool, meaning it cannot be used to manage NVDIMMs from vendors other than Intel. The full project is open source and can be seen on [GitHub](https://github.com/intel/ipmctl). In this guide we will refer to Intel® Optane™ DC memory modules simply as _modules_ or _persistent memory modules_.
`ipmctl` refers to the following interface components:

* `libipmctl`: An Application Programming Interface \(API\) library for managing persistent memory modules.
* `ipmctl`: A Command Line Interface \(CLI\) application for configuring and managing persistent memory modules from the command line.
* `ipmctl-monitor`: A monitor daemon/system service for monitoring the health and status of persistent memory modules.

Functionality includes:

* Discover Intel Optane DC memory modules on the platform
* Provision the platform memory configuration
* Learn more about operating modes in this [video](link)
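For example, discovering the modules installed in a platform is a single command (run as root; output not shown):

```text
# ipmctl show -dimm
```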
The `ipmctl` utility has many options. A complete list of commands can be shown by executing `ipmctl` with no arguments, running `ipmctl help`, or reading the `ipmctl(1)` man page. Running `ipmctl` requires root privileges. `ipmctl` can also be used from UEFI. Namespaces are created using `ipmctl` at the UEFI level or with the [ndctl utility](../getting-started-guide/what-is-ndctl.md).
Usage:

```text
ipmctl COMMAND [OPTIONS] [TARGETS] [PROPERTIES]
```

Items in square brackets `[..]` are optional. Options, targets, and property values are separated by a pipe `|` meaning "or", and the default value is italicized. Items in parentheses `(..)` indicate a user-supplied value.

`ipmctl` commands include:

* help
* load
* set
* delete
* show
* create
* dump
* start

More information can be shown for each command using the `-verbose` flag, which is helpful for debugging.
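As an illustration of how the general syntax maps onto a concrete command, a provisioning request looks like this (shown only as an example of the COMMAND/TARGET/PROPERTY layout):

```text
# ipmctl create -goal PersistentMemoryType=AppDirect
```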
When `Config` is specified, the `Current`, `Input Data Size`, `Output Data Size`, and `Start Offset` values in the Configuration header are set to zero, making those tables invalid. This action can be useful before moving modules from one system to another, as goal creation rules may restrict provisioning DIMMs with an existing configuration.
> Warning: This command may result in data loss. Data should be backed up to other storage before executing this command.
* `-dimm (DimmIDs)`: Deletes the PCD data on specific persistent memory modules by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to delete the PCD data for all manageable modules.
* `-pcd Config`: Clears the configuration management information.
### **Examples**
Clear the Cin, Cout, Ccur tables from all manageable modules:

```text
$ ipmctl delete -dimm -pcd Config
```
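To clear the PCD on specific modules only, supply the DimmIDs (the IDs below are placeholders):

```text
$ ipmctl delete -dimm 0x0001,0x0011 -pcd Config
```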
### **Limitations**
The specified module\(s\) must be manageable by the host software, and if data-at-rest security is enabled, the modules must be unlocked. Any existing namespaces associated with the requested module\(s\) should be deleted before running this command.