void pmap_init(void);
void pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp);
vaddr_t pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp);
pmap_t pmap_kernel(void);
pmap_t pmap_create(void);
void pmap_destroy(pmap_t pmap);
void pmap_reference(pmap_t pmap);
void pmap_fork(pmap_t src_map, pmap_t dst_map);
long pmap_resident_count(pmap_t pmap);
long pmap_wired_count(pmap_t pmap);
vaddr_t pmap_growkernel(vaddr_t maxkvaddr);
int pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags);
void pmap_remove(pmap_t pmap, vaddr_t sva, vaddr_t eva);
void pmap_remove_all(pmap_t pmap);
void pmap_protect(pmap_t pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot);
void pmap_unwire(pmap_t pmap, vaddr_t va);
bool pmap_extract(pmap_t pmap, vaddr_t va, paddr_t *pap);
void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot);
void pmap_kremove(vaddr_t va, vsize_t size);
void pmap_copy(pmap_t dst_map, pmap_t src_map, vaddr_t dst_addr, vsize_t len, vaddr_t src_addr);
void pmap_collect(pmap_t pmap);
void pmap_update(pmap_t pmap);
void pmap_activate(struct lwp *l);
void pmap_deactivate(struct lwp *l);
void pmap_zero_page(paddr_t pa);
void pmap_copy_page(paddr_t src, paddr_t dst);
void pmap_page_protect(struct vm_page *pg, vm_prot_t prot);
bool pmap_clear_modify(struct vm_page *pg);
bool pmap_clear_reference(struct vm_page *pg);
bool pmap_is_modified(struct vm_page *pg);
bool pmap_is_referenced(struct vm_page *pg);
paddr_t pmap_phys_address(paddr_t cookie);
vaddr_t PMAP_MAP_POOLPAGE(paddr_t pa);
paddr_t PMAP_UNMAP_POOLPAGE(vaddr_t va);
void PMAP_PREFER(vaddr_t hint, vaddr_t *vap, vsize_t sz, int td);
void pmap_procwr(struct proc *p, vaddr_t va, vsize_t size);
In order to cope with hardware architectures that make the invalidation of virtual address mappings expensive (e.g., TLB invalidations, TLB shootdown operations for multiple processors), the pmap module is allowed to delay mapping invalidation or protection operations until such time as they are actually necessary. The functions that are allowed to delay such actions are pmap_enter(), pmap_remove(), pmap_protect(), pmap_kenter_pa(), and pmap_kremove(). Callers of these functions must use the pmap_update() function to notify the pmap module that the mappings need to be made correct. Since the pmap module is provided with information as to which processors are using a given physical map, the pmap module may use whatever optimizations it has available to reduce the expense of virtual-to-physical mapping synchronization.
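The following minimal sketch illustrates this contract from the caller's side: a batch of mapping operations followed by a single pmap_update() call. The kernel_map_pages() helper is hypothetical, not part of the pmap API.

/*
 * Sketch: batch mapping operations, then flush delayed updates.
 * kernel_map_pages() is hypothetical; PAGE_SIZE, PMAP_CANFAIL, and
 * the pmap functions are as described in this document.
 */
int
kernel_map_pages(pmap_t pm, vaddr_t va, paddr_t pa, int npages)
{
	int i, error;

	for (i = 0; i < npages; i++) {
		/* Each call may defer TLB invalidation internally. */
		error = pmap_enter(pm, va + i * PAGE_SIZE,
		    pa + i * PAGE_SIZE,
		    VM_PROT_READ | VM_PROT_WRITE, PMAP_CANFAIL);
		if (error) {
			/* Undo the partial batch before failing. */
			pmap_remove(pm, va, va + i * PAGE_SIZE);
			pmap_update(pm);
			return error;
		}
	}
	/* Make all delayed mapping updates visible before use. */
	pmap_update(pm);
	return 0;
}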
Machine-dependent code must provide the header file <machine/pmap.h>. This file contains the definition of the pmap structure:
struct pmap {
/* Contents defined by pmap implementation. */
};
typedef struct pmap *pmap_t;
This header file may also define other data structures that the pmap implementation uses.
Note that all prototypes for pmap interface functions are provided by the header file <uvm/uvm_pmap.h>. It is possible to override this behavior by defining the C pre-processor macro PMAP_EXCLUDE_DECLS. This may be used to add a layer of indirection to pmap API calls, for example to handle different MMU types in a single pmap module. If the PMAP_EXCLUDE_DECLS macro is defined, <machine/pmap.h> must provide function prototypes in a block like so:
#ifdef _KERNEL /* not exposed to user namespace */
__BEGIN_DECLS /* make safe for C++ */
/* Prototypes go here. */
__END_DECLS
#endif /* _KERNEL */
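As a hedged sketch of the kind of indirection this enables, the prototypes in <machine/pmap.h> could dispatch through a per-MMU operations table; the struct pmap_ops and mmu_pmap_ops names below are illustrative, not part of any NetBSD interface.

/*
 * Illustrative dispatch table for supporting several MMU types in a
 * single pmap module; these names are not part of any real interface.
 */
struct pmap_ops {
	int  (*po_enter)(pmap_t, vaddr_t, paddr_t, vm_prot_t, int);
	void (*po_remove)(pmap_t, vaddr_t, vaddr_t);
	/* ... one entry per pmap interface function ... */
};

extern const struct pmap_ops *mmu_pmap_ops;	/* selected at boot */

int
pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags)
{
	return mmu_pmap_ops->po_enter(pmap, va, pa, prot, flags);
}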
The header file <uvm/uvm_pmap.h> defines a structure for tracking pmap statistics (see below). This structure is defined as:
struct pmap_statistics {
long resident_count; /* number of mapped pages */
long wired_count; /* number of wired pages */
};
Many CPUs provide hardware support for tracking modified/referenced information; however, many others, particularly modern RISC CPUs, do not. On CPUs which lack hardware support for modified/referenced tracking, the pmap module must emulate it in software. There are several strategies for doing this, and the best strategy depends on the CPU.
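One common strategy is sketched below: mappings are entered with hardware permissions weaker than the software permissions, so the first reference or write traps, and the trap handler records the attribute and upgrades the PTE. All PTE bit names and helper functions here are illustrative, not the API of any particular port.

typedef uint32_t pt_entry_t;		/* illustrative PTE type */
#define PTE_SW_VALID	0x01		/* mapping exists (software bit) */
#define PTE_SW_WRITE	0x02		/* write permitted (software bit) */
#define PTE_HW_VALID	0x04		/* hardware-visible valid bit */
#define PTE_HW_WRITE	0x08		/* hardware-visible write bit */
#define PV_REFERENCED	0x01		/* per-page attribute bits */
#define PV_MODIFIED	0x02

pt_entry_t *pte_lookup(pmap_t, vaddr_t);	/* hypothetical helpers */
paddr_t pte_pa(pt_entry_t);
void page_attr_set(paddr_t, int);
void tlb_update(vaddr_t, pt_entry_t);

/* Called from the trap handler before treating the fault as genuine. */
bool
pmap_fault_fixup(pmap_t pm, vaddr_t va, bool write)
{
	pt_entry_t *pte = pte_lookup(pm, va);

	if (pte == NULL || (*pte & PTE_SW_VALID) == 0)
		return false;		/* no mapping; a genuine fault */
	if (write && (*pte & PTE_SW_WRITE) == 0)
		return false;		/* a real protection violation */

	*pte |= PTE_HW_VALID;		/* the page has been referenced */
	page_attr_set(pte_pa(*pte), PV_REFERENCED);
	if (write) {
		*pte |= PTE_HW_WRITE;	/* and modified */
		page_attr_set(pte_pa(*pte), PV_MODIFIED);
	}
	tlb_update(va, *pte);		/* reload and retry the access */
	return true;			/* fault handled by pmap */
}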
The ``referenced'' attribute is used by the pagedaemon to determine if a page is ``active''. Active pages are not candidates for re-use in the page replacement algorithm. Accurate referenced information is not required for correct operation; if supplying referenced information for a page is not feasible, then the pmap implementation should always consider the ``referenced'' attribute to be false.
The ``modified'' attribute is used by the pagedaemon to determine if a page needs to be cleaned (written to backing store: swap space, a regular file, etc.). Accurate modified information must be provided by the pmap module for correct operation of the virtual memory system.
Note that modified/referenced information is only tracked for pages managed by the virtual memory system (i.e., pages for which a vm_page structure exists). In addition, only ``managed'' mappings of those pages have modified/referenced tracking. Mappings entered with the pmap_enter() function are ``managed'' mappings; it is possible to create ``unmanaged'' mappings of a page using the pmap_kenter_pa() function. The use of ``unmanaged'' mappings should be limited to code which may execute in interrupt context (for example, the kernel memory allocator), or to enter mappings for physical addresses which are not managed by the virtual memory system. ``Unmanaged'' mappings may only be entered into the kernel's virtual address space. This constraint is placed on the callers of the pmap_kenter_pa() and pmap_kremove() functions so that the pmap implementation need not block interrupts when manipulating data structures or holding locks.
Also note that modified/referenced information must be tracked on a per-page basis; it is not an attribute of a mapping, but an attribute of a page. Therefore, even after all mappings for a given page have been removed, the modified/referenced information for that page must be preserved. The only time the modified/referenced attributes may be cleared is when the virtual memory system explicitly calls the pmap_clear_modify() and pmap_clear_reference() functions. These functions must also change any internal state necessary to detect the page being modified or referenced again after the modified or referenced state is cleared.
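A sketch of what this internal state change can look like in a software-tracked implementation of pmap_clear_modify(), reusing the illustrative PV_MODIFIED bit from the sketch above; the pv structures and helpers are hypothetical.

/* Illustrative per-page mapping list; not the real NetBSD types. */
struct pv_entry {
	struct pv_entry *pv_next;
	pmap_t           pv_pmap;
	vaddr_t          pv_va;
};
struct pv_head {
	struct pv_entry *pvh_first;
	int              pvh_attrs;	/* PV_MODIFIED, PV_REFERENCED */
};

struct pv_head *pv_head_of(struct vm_page *);	/* hypothetical */
void pte_clear_write(pmap_t, vaddr_t);		/* hypothetical */

bool
pmap_clear_modify(struct vm_page *pg)
{
	struct pv_head *pvh = pv_head_of(pg);
	struct pv_entry *pv;
	bool was_modified = (pvh->pvh_attrs & PV_MODIFIED) != 0;

	pvh->pvh_attrs &= ~PV_MODIFIED;
	/*
	 * Revoke write permission from every mapping of the page so
	 * that the next write faults and records the attribute again.
	 */
	for (pv = pvh->pvh_first; pv != NULL; pv = pv->pv_next)
		pte_clear_write(pv->pv_pmap, pv->pv_va);
	return was_modified;
}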
(Prior to NetBSD 1.6, pmap implementations could get away without this because UVM, and Mach VM before it, always called pmap_page_protect() before clearing the modified or referenced state; UVM no longer does this, so all pmap implementations must now handle it.)
A ``resident'' page is one for which a mapping exists. This statistic is used to compute the resident size of a process and enforce resource limits. Only pages (whether managed by the virtual memory system or not) which are mapped into a physical map should be counted in the resident count.
A ``wired'' page is one for which a wired mapping exists. This statistic is used to enforce resource limits.
Note that it is recommended (though not required) that the pmap implementation use the pmap_statistics structure to track pmap statistics, by placing it inside the pmap structure and adjusting the counts when mappings are established, changed, or removed. This avoids potentially expensive data structure traversals when the statistics are queried.
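A minimal sketch of this arrangement, assuming a machine-dependent struct pmap; only pmap_statistics is real, the other names are illustrative.

/* In <machine/pmap.h>: embed the statistics in the pmap itself. */
struct pmap {
	struct pmap_statistics	pm_stats;	/* resident/wired counts */
	/* ... machine-dependent page table state ... */
};

/* Adjust the counts in place as mappings are entered and removed. */
static inline void
pmap_stats_adjust(struct pmap *pm, long resident_delta, long wired_delta)
{
	pm->pm_stats.resident_count += resident_delta;
	pm->pm_stats.wired_count += wired_delta;
}

/* The query functions then reduce to field reads, e.g. as macros: */
#define	pmap_resident_count(pm)	((pm)->pm_stats.resident_count)
#define	pmap_wired_count(pm)	((pm)->pm_stats.wired_count)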
void pmap_init(void)
This function initializes the pmap module. It is called by uvm_init() to initialize any data structures that the module needs to manage physical maps.
pmap_t pmap_kernel(void)
Returns a pointer to the pmap structure that maps the kernel virtual address space. Note that this function may be provided as a C pre-processor macro.
void pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp)
The pmap_virtual_space() function is called to determine the initial kernel virtual address space beginning and end. These values are used to create the kernel's virtual memory map. The function must set *vstartp to the first kernel virtual address that will be managed by uvm(9), and must set *vendp to the last kernel virtual address that will be managed by uvm(9).

If the pmap_growkernel() feature is used by a pmap implementation, then *vendp should be set to the maximum kernel virtual address allowed by the implementation. If pmap_growkernel() is not used, then *vendp must be set to the maximum kernel virtual address that can be mapped with the resources currently allocated to map the kernel virtual address space.
pmap_t pmap_create(void)
Create a physical map and return it to the caller. The reference count on the new map is 1.
void pmap_destroy(pmap_t pmap)
Drop the reference count on the specified physical map. If the reference count drops to 0, all resources associated with the physical map are released and the physical map is destroyed.
void pmap_reference(pmap_t pmap)
Increment the reference count on the specified physical map.
long pmap_resident_count(pmap_t pmap)
Query the ``resident pages'' statistic for pmap. Note that this function may be provided as a C pre-processor macro.
long pmap_wired_count(pmap_t pmap)
Query the ``wired pages'' statistic for pmap. Note that this function may be provided as a C pre-processor macro.
int pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags)
Create a mapping in physical map pmap for the physical address pa at the virtual address va with protection specified by bits in prot:

VM_PROT_READ       The mapping must permit reading.
VM_PROT_WRITE      The mapping must permit writing.
VM_PROT_EXECUTE    The mapping must permit execution.

The flags argument contains protection bits (the same bits as used in the prot argument) indicating the type of access that caused the mapping to be created. This information may be used to seed modified/referenced information for the page being mapped, possibly avoiding redundant faults on platforms that track modified/referenced information in software. Other information provided by flags:

PMAP_WIRED         The mapping being created is a wired mapping.
PMAP_CANFAIL       The call to pmap_enter() is allowed to fail. If this flag is not set, and the pmap_enter() call is unable to create the mapping, perhaps due to insufficient resources, the pmap module must panic.
The access type provided in the flags argument will never exceed the protection specified by prot; the pmap implementation may assert this. Note that on systems that do not provide hardware support for tracking modified/referenced information, the modified/referenced information for the page must be seeded with the access type provided in flags if the PMAP_WIRED flag is set. This is to prevent a fault for the purpose of tracking modified/referenced information from occurring while the system is in a critical section where a fault would be unacceptable.
Note that pmap_enter() is sometimes called to enter a mapping at a virtual address for which a mapping already exists. In this situation, the implementation must take whatever action is necessary to invalidate the previous mapping before entering the new one. Also note that pmap_enter() is sometimes called to change the protection for a pre-existing mapping, or to change the ``wired'' attribute for a pre-existing mapping.

The pmap_enter() function returns 0 on success or an error code indicating the mode of failure.
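For example, a fault handler might enter a pageable mapping as in the following sketch, where failure is tolerated rather than causing a panic; fault_enter() is a hypothetical wrapper, not part of the API.

/*
 * Sketch: entering a pageable mapping from fault handling code.
 * access_type holds the VM_PROT_* bit(s) of the faulting access,
 * as described above.
 */
int
fault_enter(pmap_t pm, vaddr_t va, paddr_t pa, vm_prot_t access_type)
{
	int error;

	/* Seed modified/referenced state from the faulting access;
	 * allow failure under resource shortage instead of panicking. */
	error = pmap_enter(pm, va, pa, VM_PROT_READ | VM_PROT_WRITE,
	    access_type | PMAP_CANFAIL);
	if (error)
		return error;	/* caller may wait and retry the fault */
	pmap_update(pm);
	return 0;
}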
void pmap_remove(pmap_t pmap, vaddr_t sva, vaddr_t eva)
Remove mappings from the virtual address range sva to eva from the specified physical map.
void pmap_remove_all(pmap_t pmap)
This function informs the pmap module that all mappings in pmap will be removed before any more entries are entered. Following this call, there will be pmap_remove() calls resulting in every mapping being removed, followed by either pmap_destroy() or pmap_update(). No other pmap interfaces which take pmap as an argument will be called during this process. Other interfaces which might need to access pmap (such as pmap_page_protect()) are permitted during this process. The pmap implementation is free to either remove all the pmap's mappings immediately in pmap_remove_all(), or to use the knowledge of the upcoming pmap_remove() calls to optimize the removals (or to just ignore this call).
void pmap_protect(pmap_t pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot)
Set the protection of the mappings in the virtual address range sva to eva in the specified physical map.
void pmap_unwire(pmap_t pmap, vaddr_t va)
Clear the ``wired'' attribute on the mapping for virtual address va.
bool pmap_extract(pmap_t pmap, vaddr_t va, paddr_t *pap)
This function extracts a mapping from the specified physical map; pmap_extract() should return the physical address for any kernel-accessible address, including KSEG-style direct-mapped kernel addresses. The pmap_extract() function returns false if a mapping for va does not exist. Otherwise, it returns true and places the physical address mapped at va into *pap if the pap argument is non-NULL.
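For example, a caller translating a kernel virtual address might look like the following sketch; kvtophys() is a hypothetical wrapper, not part of the API.

/* Translate a kernel virtual address, panicking if it is unmapped. */
paddr_t
kvtophys(vaddr_t va)
{
	paddr_t pa;

	if (!pmap_extract(pmap_kernel(), va, &pa))
		panic("kvtophys: 0x%lx not mapped", (unsigned long)va);
	return pa;
}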
void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
Enter an ``unmanaged'' mapping for physical address pa at virtual address va with protection prot into the kernel physical map. Mappings of this type are always ``wired'', and are unaffected by routines that alter the protection of pages (such as pmap_page_protect()). Such mappings are also not included in the gathering of modified/referenced information about a page. Mappings entered with pmap_kenter_pa() by machine-independent code must not have execute permission, as the data structures required to track execute permission of a page may not be available to pmap_kenter_pa(). Machine-independent code is not allowed to enter a mapping with pmap_kenter_pa() at a virtual address for which a valid mapping already exists. Mappings created with pmap_kenter_pa() may be removed only with a call to pmap_kremove().

Note that pmap_kenter_pa() must be safe for use in interrupt context; splvm() blocks interrupts that might cause pmap_kenter_pa() to be called.
void pmap_kremove(vaddr_t va, vsize_t size)
Remove all mappings starting at virtual address va and continuing for size bytes from the kernel physical map. All mappings that are removed must be of the ``unmanaged'' type created with pmap_kenter_pa(); the implementation may assert this.
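A hedged sketch of the expected pairing, mapping a physical page into the kernel map and later removing it; kva_alloc() and kva_free() stand in for however the caller obtains kernel virtual address space and are not part of the pmap API.

/* Sketch: temporarily window a physical page into the kernel map. */
vaddr_t
kernel_window_map(paddr_t pa)
{
	vaddr_t va = kva_alloc(PAGE_SIZE);

	/* Unmanaged, wired, and no execute permission (as required). */
	pmap_kenter_pa(va, pa, VM_PROT_READ | VM_PROT_WRITE);
	pmap_update(pmap_kernel());
	return va;
}

void
kernel_window_unmap(vaddr_t va)
{
	pmap_kremove(va, PAGE_SIZE);
	pmap_update(pmap_kernel());
	kva_free(va, PAGE_SIZE);
}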
void pmap_copy(pmap_t dst_map, pmap_t src_map, vaddr_t dst_addr, vsize_t len, vaddr_t src_addr)
Copy mappings from the address range starting at src_addr in src_map, for len bytes, into dst_map starting at dst_addr. Note that while this function is required to be provided by a pmap implementation, it is not actually required to do anything; pmap_copy() is merely advisory (it is used in the fork(2) path to ``pre-fault'' the child's address space).
void pmap_collect(pmap_t pmap)
Reclaim whatever resources can safely be released from the specified physical map. Note that while this function is required to be provided by a pmap implementation, it is not actually required to do anything; pmap_collect() is merely advisory. It is recommended, however, that pmap_collect() be fully implemented by a pmap implementation.
void pmap_update(pmap_t pmap)
This function informs the pmap module that all physical mappings for the specified physical map must now be made correct; that is, all delayed virtual-to-physical mapping updates (such as TLB invalidation) must be completed. It must be called after calls to pmap_enter(), pmap_remove(), pmap_protect(), pmap_kenter_pa(), and pmap_kremove() in order to ensure correct operation of the virtual memory system. If a pmap implementation does not delay virtual-to-physical mapping updates, pmap_update() has no operation. In this case, the call may be deleted using a C pre-processor macro in <machine/pmap.h>.
void pmap_activate(struct lwp *l)
Activate the physical map used by the process behind lwp l. This is called by the virtual memory system when the virtual memory context for a process is changed, and is also often used by machine-dependent context switch code to program the memory management hardware with the process's page table base, etc. Note that pmap_activate() may not always be called when l is the current lwp; pmap_activate() must be able to handle this scenario.
void pmap_deactivate(struct lwp *l)
Deactivate the physical map used by the process behind lwp l. It is generally used in conjunction with pmap_activate(). Like pmap_activate(), pmap_deactivate() may not always be called when l is the current lwp.
void pmap_zero_page(paddr_t pa)
Zero the page at physical address pa. The pmap implementation must take whatever steps are necessary to map the page to a kernel-accessible address and zero the page. It is suggested that implementations use an optimized zeroing algorithm, as the performance of this function directly impacts page fault performance. The implementation may assume that the region is PAGE_SIZE aligned and exactly PAGE_SIZE bytes in length.

Note that the cache configuration of the platform should also be considered in the implementation of pmap_zero_page(). For example, on systems with a physically-addressed cache, the cache load caused by zeroing the page will not be wasted, as the zeroing is usually done on-demand. However, on systems with a virtually-addressed cache, the cache load caused by zeroing the page will be wasted, as the page will be mapped at a virtual address which is different from that used to zero the page. In the virtually-addressed cache case, care should also be taken to avoid cache alias problems.
void pmap_copy_page(paddr_t src, paddr_t dst)
Copy the PAGE_SIZE region starting at physical address src to the same sized region starting at physical address dst. The pmap implementation must take whatever steps are necessary to map the source and destination pages to a kernel-accessible address and perform the copy. It is suggested that implementations use an optimized copy algorithm, as the performance of this function directly impacts page fault performance. The implementation may assume that both regions are PAGE_SIZE aligned and exactly PAGE_SIZE bytes in length. The same cache considerations that apply to pmap_zero_page() apply to pmap_copy_page().
void pmap_page_protect(struct vm_page *pg, vm_prot_t prot)
Lower the permissions for all mappings of the page pg to prot. This function is used by the virtual memory system to implement copy-on-write (called with VM_PROT_READ set in prot) and to revoke all mappings when cleaning a page (called with no bits set in prot). Access permissions must never be added to a page as a result of this call.
bool pmap_clear_modify(struct vm_page *pg)
Clear the ``modified'' attribute on the page pg. The pmap_clear_modify() function returns true or false indicating whether or not the ``modified'' attribute was set on the page before it was cleared. Note that this function may be provided as a C pre-processor macro.
bool pmap_clear_reference(struct vm_page *pg)
Clear the ``referenced'' attribute on the page pg. The pmap_clear_reference() function returns true or false indicating whether or not the ``referenced'' attribute was set on the page before it was cleared. Note that this function may be provided as a C pre-processor macro.
bool pmap_is_modified(struct vm_page *pg)
Test whether the ``modified'' attribute is set on the page pg. Note that this function may be provided as a C pre-processor macro.
bool pmap_is_referenced(struct vm_page *pg)
Test whether the ``referenced'' attribute is set on the page pg. Note that this function may be provided as a C pre-processor macro.
paddr_t pmap_phys_address(paddr_t cookie)
Convert a cookie returned by a device mmap() function into a physical address. This function is provided to accommodate systems which have physical address spaces larger than can be directly addressed by the platform's paddr_t type. The existence of this function is highly dubious, and it is expected that this function will be removed from the pmap API in a future release of NetBSD. Note that this function may be provided as a C pre-processor macro.
vaddr_t pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp)
This function is a bootstrap memory allocator, which may be provided by a pmap implementation as a more efficient alternative to the generic allocator used by uvm_pageboot_alloc(). Note that memory allocated with pmap_steal_memory() will never be freed, and mappings made by pmap_steal_memory() must never be ``forgotten''.

Note that pmap_steal_memory() should not be used as a general-purpose early-startup memory allocation routine. It is intended to be used only by the uvm_pageboot_alloc() routine and its supporting routines. If you need to allocate memory before the virtual memory system is initialized, use uvm_pageboot_alloc(). See uvm(9) for more information.

The pmap_steal_memory() function returns the kernel-accessible address of the allocated memory. If no memory can be allocated, or if allocated memory cannot be mapped, the function must panic.

If the pmap_steal_memory() function uses address space from the range provided to uvm(9) by the pmap_virtual_space() call, then pmap_steal_memory() must adjust *vstartp and *vendp upon return.

The pmap_steal_memory() function is enabled by defining the C pre-processor macro PMAP_STEAL_MEMORY in <machine/pmap.h>.
vaddr_t pmap_growkernel(vaddr_t maxkvaddr)
Management of the kernel virtual address space is complicated by the fact that it is not always safe to wait for resources with which to map a kernel virtual address, yet it is not always desirable to pre-allocate the resources to map the entire kernel virtual address space either. The pmap_growkernel() interface is designed to help alleviate this problem. The virtual memory startup code may choose to allocate an initial set of mapping resources (e.g., page tables) and set an internal variable indicating how much kernel virtual address space can be mapped using those initial resources. Then, when the virtual memory system wishes to map something at an address beyond that initial limit, it calls pmap_growkernel() to pre-allocate more resources with which to create the mapping. Note that once additional kernel virtual address space mapping resources have been allocated, they should not be freed; it is likely they will be needed again.

The pmap_growkernel() function returns the new maximum kernel virtual address that can be mapped with the resources it has available. If new resources cannot be allocated, pmap_growkernel() must panic.

The pmap_growkernel() function is enabled by defining the C pre-processor macro PMAP_GROWKERNEL in <machine/pmap.h>.
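The usual implementation pattern is sketched below; pmap_maxkvaddr, pt_alloc_for(), and PT_SPAN are illustrative names, not part of the API.

/*
 * Sketch of the usual pmap_growkernel() pattern.  pmap_maxkvaddr is
 * the internal "mappable up to here" limit established at startup.
 */
#define	PT_SPAN	(4UL * 1024 * 1024)	/* illustrative: KVA per table */
bool pt_alloc_for(pmap_t, vaddr_t);	/* hypothetical allocator */

static vaddr_t pmap_maxkvaddr;

vaddr_t
pmap_growkernel(vaddr_t maxkvaddr)
{
	while (pmap_maxkvaddr < maxkvaddr) {
		/* Must not wait; panic rather than sleep for memory. */
		if (!pt_alloc_for(pmap_kernel(), pmap_maxkvaddr))
			panic("pmap_growkernel: out of page tables");
		pmap_maxkvaddr += PT_SPAN;	/* never rolled back */
	}
	return pmap_maxkvaddr;	/* new mappable limit */
}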
void pmap_fork(pmap_t src_map, pmap_t dst_map)
The pmap_fork() function is provided as a way to associate information from src_map with dst_map when a vmspace is forked. pmap_fork() is called from uvmspace_fork().

The pmap_fork() function is enabled by defining the C pre-processor macro PMAP_FORK in <machine/pmap.h>.
vaddr_t PMAP_MAP_POOLPAGE(paddr_t pa)
This function maps a single physical page for use by the kernel's pool(9) allocator, typically via a facility such as a direct-mapped memory segment so that no kernel virtual address space needs to be allocated. PMAP_MAP_POOLPAGE() returns the kernel-accessible address of the page being mapped. It must always succeed.

The use of PMAP_MAP_POOLPAGE() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>. If PMAP_MAP_POOLPAGE() is defined, PMAP_UNMAP_POOLPAGE() must also be defined.

The following is an example of how to define PMAP_MAP_POOLPAGE():

#define PMAP_MAP_POOLPAGE(pa)	MIPS_PHYS_TO_KSEG0((pa))

This takes the physical address of a page and returns the KSEG0 address of that page on a MIPS processor.
paddr_t PMAP_UNMAP_POOLPAGE(vaddr_t va)
This function is the inverse of PMAP_MAP_POOLPAGE(): it unmaps a page that was mapped with PMAP_MAP_POOLPAGE(). PMAP_UNMAP_POOLPAGE() returns the physical address of the page corresponding to the provided kernel-accessible address.

The use of PMAP_UNMAP_POOLPAGE() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>. If PMAP_UNMAP_POOLPAGE() is defined, PMAP_MAP_POOLPAGE() must also be defined.

The following is an example of how to define PMAP_UNMAP_POOLPAGE():

#define PMAP_UNMAP_POOLPAGE(va)	MIPS_KSEG0_TO_PHYS((va))

This takes the KSEG0 address of a previously-mapped pool page and returns the physical address of that page on a MIPS processor.
void PMAP_PREFER(vaddr_t hint, vaddr_t *vap, vsize_t sz, int td)
This function is used by uvm_map(9) to adjust a virtual address being chosen for a new mapping so that it avoids virtual cache alias problems; if necessary, the virtual address pointed to by vap will be advanced. hint is an object offset which will be mapped into the resulting virtual address, and sz is the size of the object. td indicates if the machine-dependent pmap uses the topdown VM.

The use of PMAP_PREFER() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>.
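As a hedged illustration for a machine with a virtually indexed cache, the computation typically advances the address so that it has the same cache color as hint; CACHE_ALIAS_MASK is an assumed machine constant, and handling of the td (topdown) case is omitted.

/*
 * Sketch of a PMAP_PREFER() computation for a virtually indexed
 * cache.  CACHE_ALIAS_MASK (the alias span minus one, a power of
 * two minus one) is an assumed machine-dependent constant.
 */
#define	CACHE_ALIAS_MASK	0xffffUL	/* illustrative: 64 KB span */

static inline void
pmap_prefer(vaddr_t hint, vaddr_t *vap)
{
	vsize_t d = (hint - *vap) & CACHE_ALIAS_MASK;

	/* Advance *vap so it has the same cache color as hint. */
	if (d != 0)
		*vap += d;
}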
void pmap_procwr(struct proc *p, vaddr_t va, vsize_t size)
Synchronize the CPU instruction cache for the region starting at virtual address va and continuing for size bytes; the address space is designated by p. This function is typically used to flush instruction caches after code modification.

The use of pmap_procwr() is enabled by defining the C pre-processor macro PMAP_NEED_PROCWR in <machine/pmap.h>.
Between 4.3BSD and 4.4BSD, the Mach virtual memory system, including the pmap API, was ported to BSD and included in the 4.4BSD release.
NetBSD inherited the BSD version of the Mach virtual memory system. NetBSD 1.4 was the first NetBSD release with the new uvm(9) virtual memory system, which included several changes to the pmap API. Since the introduction of uvm(9), the pmap API has evolved further.
Mike Hibler did the integration of the Mach virtual memory system into 4.4BSD and implemented a pmap module for the Motorola 68020+68851/68030/68040.

The pmap API as it exists in NetBSD is derived from 4.4BSD, and has been modified by Chuck Cranor, Charles M. Hannum, Chuck Silvers, Wolfgang Solfrank, Bill Sommerfeld, and Jason R. Thorpe.

The author of this document is Jason R. Thorpe <thorpej@NetBSD.org>.
The use of pmap_activate() and pmap_deactivate() needs to be reexamined.

The use of pmap_copy() needs to be reexamined. Empirical evidence suggests that performance of the system suffers when pmap_copy() actually performs its defined function. This is largely because the copy of the virtual-to-physical mappings is wasted if the process calls execve(2) after fork(2). For this reason, it is recommended that pmap implementations leave the body of pmap_copy() empty for now.