|
access and LDT access
|
|
* i386/i386at/model_dep.c (halt_all_cpus): Change halt message to better
explain what happened.
|
|
* linux/dev/glue/block.c (rdwr_full): Set BH_Bounce if the physical
address of the user data is not in directmap.
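The condition being tested amounts to a plain range check; a minimal sketch, assuming a `directmap_end` boundary (the helper and constant names here are hypothetical, not the actual glue code):

```c
#include <assert.h>

/* Hypothetical sketch: user data whose physical address lies beyond
 * the directly-mapped region cannot be reached through the direct
 * map, so the buffer head must be marked for bouncing (BH_Bounce). */
static int needs_bounce(unsigned long phys_addr, unsigned long directmap_end)
{
    return phys_addr >= directmap_end;
}
```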
|
|
This happens if the passed count is 0.
Reported by Richard Braun.
* linux/dev/glue/block.c (device_write): Set copy variable before
vm_map_copy_discard() is called.
|
|
* kern/task.c (task_ledger_acquire): Remove function that I added by
accident.
(task_ledger_release): Likewise.
|
|
* kern/task.c (task_create_kernel): Handle NULL parent tasks.
|
|
Check receiver in task_create. Fixes a crash when sending that
message to a non-task port.
* kern/bootstrap.c (boot_script_task_create): Use the new function.
* kern/task.c (task_create): Rename to task_create_internal, create a
new function in its place that checks the receiver first.
* kern/task.h (task_create_internal): New prototype.
|
|
* linux/dev/glue/net.c (device_write): Remove unused variables.
|
|
* i386/intel/pmap.c: Drop the register qualifier.
* ipc/ipc_kmsg.h: Likewise.
* kern/bootstrap.c: Likewise.
* kern/profile.c: Likewise.
* kern/thread.c: Likewise.
* vm/vm_object.c: Likewise.
|
|
Previously, we used an invalid pointer to mark interrupts as reserved
by Mach. This, however, crashes code trying to iterate over the list
of interrupt handlers. Use a valid structure instead.
* linux/dev/arch/i386/kernel/irq.c (reserved_mach_handler): New
function.
(reserved_mach): New variable.
(reserve_mach_irqs): Use the new variable.
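The idea can be sketched as follows; the structure layout is illustrative, not the actual Linux glue types, but it shows why a valid no-op action is safe to iterate over while a magic invalid pointer is not:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative action structure, similar in spirit to the Linux
 * glue's irqaction list nodes. */
struct irqaction {
    void (*handler)(int irq);
    const char *name;
    struct irqaction *next;
};

/* A handler that does nothing: the interrupt is reserved by Mach
 * and handled elsewhere, but list-walking code still finds a valid
 * entry instead of dereferencing an invalid pointer. */
static void reserved_mach_handler(int irq)
{
    (void)irq;
}

static struct irqaction reserved_mach = {
    reserved_mach_handler,
    "reserved by Mach",
    NULL,
};
```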
|
|
* device/dev_forward.defs: Remove unused file.
|
|
* vm/vm_object.c (vm_object_accept_old_init_protocol): Remove.
(vm_object_enter): Adapt.
|
|
* i386/i386/db_interface.h (db_read_bytes): Return boolean_t instead of
void.
* i386/i386/db_interface.c (db_user_to_kernel_address): Return -1
instead of calling db_error() if address is bogus.
(db_read_bytes): Return FALSE instead of calling db_error() if address
is bogus.
* ddb/db_access.c (db_get_task_value): Return 0 if db_read_bytes failed.
* ddb/db_examine.c (db_xcdump): Only print * if db_read_bytes failed.
|
|
* i386/intel/pmap.c (pmap_remove): Fix iteration over page directory.
(pmap_enter): Explain why it is ok here.
|
|
* vm/vm_map.c (vm_map_create): Gracefully handle resource exhaustion.
(vm_map_fork): Likewise at the callsite.
|
|
|
|
Maps '$mapXX' to a VM map structure address. @var{xx} is a task
identification number printed by a @code{show all tasks} command.
* ddb/db_task_thread.c (db_get_map): New function.
* ddb/db_task_thread.h (db_get_map): New declaration.
* ddb/db_variables.c (db_vars): Add new variable.
* doc/mach.texi: Document this.
|
|
* vm/vm_fault.c (vm_fault_page): Mute paging error message if the
object's pager is NULL. This happens when a pager is destroyed,
e.g. at system shutdown time when the root filesystem terminates.
|
|
* ddb/db_command.c (db_debug_all_traps_cmd): New declaration and
function.
(db_debug_port_references_cmd): Likewise.
* doc/mach.texi: Describe new commands.
* i386/i386/db_interface.h (db_debug_all_traps): New declaration.
* i386/i386/trap.c (db_debug_all_traps): New function.
* ipc/mach_port.c (db_debug_port_references): New function.
* ipc/mach_port.h (db_debug_port_references): New declaration.
|
|
* ddb/db_print.c (OPTION_SCHED): New macro.
(db_print_thread): Display scheduling information if the flag is
given.
(db_print_task): Adapt.
(db_show_all_threads): Parse new modifier.
(db_show_one_thread): Likewise.
* doc/mach.texi: Document the new flag.
|
|
|
|
* kern/host.c (host_info): Scale 'min_quantum' by 'tick', then convert
to milliseconds.
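The conversion can be illustrated like this, assuming `tick` is the clock tick length in microseconds and `min_quantum` is a count of ticks; the names mirror the kernel code but this standalone helper is a sketch, not the actual host_info implementation:

```c
#include <assert.h>

/* Scale a quantum expressed in clock ticks by the tick length
 * (microseconds per tick), then convert microseconds to
 * milliseconds. */
static unsigned int quantum_ms(unsigned int min_quantum, unsigned int tick)
{
    return (min_quantum * tick) / 1000;
}
```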
|
|
* Makefile.am (clib_routines): Steal '__divdi3' from the gcc runtime.
|
|
* i386/intel/pmap.c (pmap_remove_range): Make function static.
* i386/intel/pmap.h (pmap_remove_range): Remove declaration.
|
|
* i386/i386at/rtc.c (readtodc): Do not spuriously add 70 to the year.
|
|
* i386/i386at/rtc.c (CENTURY_START): New macro.
(readtodc): Use CENTURY_START instead of assuming it is equal to 1970,
and set yr to an absolute date before calling yeartoday.
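A minimal sketch of the conversion, assuming the RTC returns a two-digit year; the CENTURY_START value of 2000 below is an assumption for illustration, not necessarily the value the macro uses:

```c
#include <assert.h>

/* Assumed value for illustration; the real macro lives in
 * i386/i386at/rtc.c. */
#define CENTURY_START 2000

/* Convert the two-digit CMOS year to an absolute year before any
 * leap-year computation such as yeartoday() runs on it. */
static int absolute_year(int cmos_yr)
{
    return cmos_yr + CENTURY_START;
}
```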
|
|
* Makefile.am (clib_routines): Add __udivmoddi4.
* linux/src/include/linux/compiler-gcc7.h: New file.
|
|
|
|
* kern/bootstrap.c (bootstrap_create): Insert the variable
'kernel-task' into the bootscript environment. Userspace can use this
instead of guessing based on the order of the first tasks.
|
|
|
|
* kern/atomic.h: New file.
* kern/kmutex.h: New file.
* kern/kmutex.c: New file.
* Makefrag.am (libkernel_a_SOURCES): Add atomic.h, kmutex.h, kmutex.c.
* kern/sched_prim.h (thread_wakeup_prim): Make it return boolean_t.
* kern/sched_prim.c (thread_wakeup_prim): Return TRUE if we woke a
thread, and FALSE otherwise.
|
|
In practice, fixes 2100, 2200, 2300, 2500, 2600, 2700, etc.
* i386/i386at/rtc.c (yeartoday): Make years divisible by 100 but not
divisible by 400 non-bisextile.
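The Gregorian rule the fix implements can be sketched in C; this standalone helper is illustrative, the real yeartoday() lives in i386/i386at/rtc.c:

```c
#include <assert.h>

/* Gregorian leap-year rule: divisible by 4, except years divisible
 * by 100, unless also divisible by 400. */
static int is_bisextile(int year)
{
    return (year % 4 == 0) && (year % 100 != 0 || year % 400 == 0);
}

static int yeartoday_days(int year)
{
    return is_bisextile(year) ? 366 : 365;
}
```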
|
|
Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of
external objects backed by the default pager, but the way it's done
has a vicious side effect: because they're considered external, the
pageout daemon can keep evicting them even though the external pagers
haven't released them, unlike internal pages which must all be
released before the pageout daemon can make progress. This can lead
to a situation where too many pages become wired, the default pager
cannot allocate memory to process new requests, and the pageout
daemon cannot recycle any more pages, causing a panic.
This change makes the pageout daemon use the same strategy for both
internal pages and external pages sent to the default pager: use
the laundry bit and wait for all laundry pages to be released,
thereby completely synchronizing the pageout daemon and the default
pager.
* vm/vm_page.c (vm_page_can_move): Allow external laundry pages to
be moved.
(vm_page_seg_evict): Don't alter the `external_laundry' bit, merely
disable double paging for external pages sent to the default pager.
* vm/vm_pageout.c: Include vm/memory_object.h.
(vm_pageout_setup): Don't check whether the `external_laundry' bit
is set, but handle external pages sent to the default pager the same
as internal pages.
|
|
Sometimes, in particular during IO spikes, the slab allocator needs
more virtual memory than is currently available. The new size should
also be fine for the Xen version.
* i386/i386/vm_param.h (VM_KERNEL_MAP_SIZE): Increase value.
|
|
* doc/mach.texi: Describe vm_wire_all, and add more information
about vm_wire and vm_protect.
|
|
This call maps the POSIX mlockall and munlockall calls.
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_wire.h.
* include/mach/gnumach.defs (vm_wire_t): New type.
(vm_wire_all): New routine.
* include/mach/mach_types.h: Include mach/vm_wire.h.
* vm/vm_map.c: Likewise.
(vm_map_enter): Automatically wire new entries if requested.
(vm_map_copyout): Likewise.
(vm_map_pageable_all): New function.
* vm/vm_map.h: Include mach/vm_wire.h.
(struct vm_map): Update description of member `wiring_required'.
(vm_map_pageable_all): New function.
* vm/vm_user.c (vm_wire_all): New function.
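The correspondence with mlockall() can be sketched as a flag mapping. The VM_WIRE_* names follow the new vm_wire_t type, but the numeric values below are assumptions for illustration and should be checked against include/mach/vm_wire.h:

```c
#include <assert.h>

/* Assumed flag values, mirroring the usual current/future split. */
#define VM_WIRE_NONE    0
#define VM_WIRE_CURRENT 1   /* wire existing entries (MCL_CURRENT) */
#define VM_WIRE_FUTURE  2   /* wire future entries (MCL_FUTURE) */
#define VM_WIRE_ALL     (VM_WIRE_CURRENT | VM_WIRE_FUTURE)

/* Hypothetical helper mapping POSIX mlockall() flags to the
 * vm_wire_t argument of the new vm_wire_all() routine. */
static int mcl_to_vm_wire(int mcl_current, int mcl_future)
{
    int flags = VM_WIRE_NONE;
    if (mcl_current)
        flags |= VM_WIRE_CURRENT;
    if (mcl_future)
        flags |= VM_WIRE_FUTURE;
    return flags;
}
```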
|
|
First, user wiring is removed, simply because it has never been used.
Second, make the VM system track wiring requests to better handle
protection. This change makes it possible to wire entries with
VM_PROT_NONE protection without actually reserving any page for
them until protection changes, and even make those pages pageable
if protection is downgraded to VM_PROT_NONE.
* ddb/db_ext_symtab.c: Update call to vm_map_pageable.
* i386/i386/user_ldt.c: Likewise.
* ipc/mach_port.c: Likewise.
* vm/vm_debug.c (mach_vm_region_info): Update values returned
as appropriate.
* vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate.
(vm_map_setup): Update member names as appropriate.
(vm_map_find_entry): Update to account for map member variable changes.
(vm_map_enter): Likewise.
(vm_map_entry_inc_wired): New function.
(vm_map_entry_reset_wired): Likewise.
(vm_map_pageable_scan): Likewise.
(vm_map_protect): Update wired access, call vm_map_pageable_scan.
(vm_map_pageable_common): Rename to ...
(vm_map_pageable): ... and rewrite to use vm_map_pageable_scan.
(vm_map_entry_delete): Fix unwiring.
(vm_map_copy_overwrite): Replace inline code with a call to
vm_map_entry_reset_wired.
(vm_map_copyin_page_list): Likewise.
(vm_map_print): Likewise. Also print map size and wired size.
(vm_map_copyout_page_list): Update to account for map member variable
changes.
* vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member,
add `wired_access' member.
(struct vm_map): Rename `user_wired' member to `size_wired'.
(vm_map_pageable_common): Remove function.
(vm_map_pageable_user): Remove macro.
(vm_map_pageable): Replace macro with function declaration.
* vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
|
|
Double paging on such objects causes deadlocks.
* vm/vm_page.c: Include <vm/memory_object.h>.
(vm_page_seg_evict): Rename laundry to double_paging to increase
clarity. Set the `external_laundry' bit when evicting a page
from an external object backed by the default pager.
* vm/vm_pageout.c (vm_pageout_setup): Wire page if the
`external_laundry' bit is set.
|
|
Unlike laundry pages sent to the default pager, pages marked with the
`external_laundry' bit remain in the page queues and must be filtered
out by the pageability check.
* vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
|
|
Memory wiring is about to be reworked, at which point the VM system
will properly track wired mappings. Removing them when changing
protection makes sense, and is fine as long as the VM system
rewires them when access is restored.
* i386/intel/pmap.c (pmap_page_protect): Decrease wiring count instead
of causing a panic when removing a wired mapping.
|
|
The interval parameter to the thread_set_timeout function is actually
in ticks.
* vm/vm_pageout.c (vm_pageout): Fix call to thread_set_timeout.
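The required conversion is a simple scaling by the tick rate; a sketch, assuming `hz` ticks per second (the helper name and truncating division are illustrative):

```c
#include <assert.h>

/* thread_set_timeout() expects an interval in ticks, not
 * milliseconds.  With hz ticks per second, a millisecond interval
 * converts as below. */
static int ms_to_ticks(int ms, int hz)
{
    return (ms * hz) / 1000;
}
```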
|
|
* version.m4 (AC_PACKAGE_VERSION): Set to 1.8.
* NEWS: Finalize for 1.8.
|
|
* doc/mach.texi: Update return codes.
* vm/vm_map.c (vm_map_pageable_common): Return KERN_NO_SPACE instead
of KERN_FAILURE if some of the specified address range does not
correspond to mapped pages. Skip unwired entries instead of failing
when unwiring.
|
|
|
|
* kern/rbtree.h (rbtree_for_each_remove): Remove trailing slash.
|
|
Since the VM system has been tracking whether pages belong to internal
or external objects, pageout throttling to external pagers has simply
not been working. The reason is that, on pageout, requests for external
pages are correctly tracked, but on page release (which is used to
acknowledge the request), external pages are not marked external
any more. This is because the external bit tracks whether a page
belongs to an external object, and all pages, including external
ones, are moved to an internal object during pageout.
To solve this issue, a new "external_laundry" bit is added. It has
the same purpose as the laundry bit, but for external pagers.
* vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove.
(vm_page_seg_evict): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Add an assertion about double paging.
(vm_page_check_usable): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout.
(vm_page_evict): Likewise.
* vm/vm_page.h (struct vm_page): New `external_laundry' member.
(vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
* vm/vm_pageout.c: Include kern/printf.h.
(DEBUG): New macro.
(VM_PAGEOUT_TIMEOUT): Likewise.
(vm_pageout_setup): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Set `external_laundry' where appropriate.
(vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout.
Add debugging code, commented out by default.
* vm/vm_resident.c (vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
(vm_page_init_template): Set `external_laundry' member to FALSE.
(vm_page_release): Rename external parameter to external_laundry.
Slightly change pageout resuming.
(vm_page_free): Rename external variable to external_laundry.
|
|
Instead of determining if memory is low, directly use the
vm_page_alloc_paused variable, which is true when memory has reached
a minimum threshold until it gets back above the high thresholds.
This makes sure double paging is used when external pagers are unable
to allocate memory.
* vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused.
(vm_page_evict_once): Remove low_memory and its computation. Blindly
pass the new alloc_paused argument instead.
(vm_page_evict): Pass the value of vm_page_alloc_paused to
vm_page_evict_once.
|
|
* vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout
and vm_page_laundry_count in order to determine whether there was
"no pageout".
|
|
When checking whether to continue paging out or not, the pageout daemon
only considers the high free page threshold of a segment. But if e.g.
the default pager had to allocate reserved pages during a previous
pageout cycle, it could have exhausted a segment (this is currently
only seen with the DMA segment). In that case, the high threshold
cannot be reached because the segment has currently no pageable page.
This change makes the pageout daemon identify this condition and
consider the segment as usable in order to make progress. The segment
will simply be ignored on the allocation path for unprivileged threads,
and if this happens with too many segments, the system will fail at
allocation time.
* vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has
no pageable page.
|
|
* kern/gsync.c (gsync_wait, gsync_wake, gsync_requeue):
Return immediately if the task argument is TASK_NULL.
|