context

  • text [from textus 'woven material', from texere 'to weave']
    • The Big Bang Theory mentions "ununravelable"; its root is ravel (to tangle) → unravel (to untangle) → as the song puts it, life... is like a tangled skein, nothing but dead knots that will not come undone
  • textile [Origin: textilis 'woven', from texere] woven fabric; textiles
  • texture [from texere] the tightness or looseness of a weave; the feel, appearance, or consistency of a surface
  • context [from com- + texere 'to weave']
  • pretext [from praetexere 'to weave in front, make an excuse'] an excuse; a pretended reason

In computing,

  • A task is an execution context. 
  • A thread is the basic, atomic unit of CPU utilization. → fibre, thread, string, cord, rope
  • A process is an instance of a program that is being executed. A process may be made up of multiple threads (see the sketch after this list).
  • A kernel thread is a way to implement background tasks inside the kernel.
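These relationships are easy to see with POSIX threads. Below is a minimal sketch of my own (not from the original article; the pthread_* calls are standard POSIX APIs): one process, two threads sharing a single address space.

/* Minimal sketch: one process, multiple threads (POSIX).
 * Compile with: cc demo.c -o demo -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int shared_counter = 0;            /* shared: same address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    pthread_mutex_lock(&lock);
    shared_counter++;                     /* both threads see the same variable */
    pthread_mutex_unlock(&lock);
    printf("thread %ld in process %d\n", (long)arg, (int)getpid());
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* prints 2 */
    return 0;
}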

The task_struct in the Linux kernel (note the theme: weave, woven → a tangled mess):

/* https://azrael.digipen.edu/~mmead/www/Courses/CS180/task_struct-linux.html */
struct task_struct {
  volatile long state;  /* -1 unrunnable, 0 runnable, >0 stopped */
  void *stack;
  atomic_t usage;
  unsigned int flags; /* per process flags, defined below */
  unsigned int ptrace;

  int lock_depth;   /* BKL lock depth */

#ifdef CONFIG_SMP
#ifdef __ARCH_WANT_UNLOCKED_CTXSW
  int oncpu;
#endif
#endif

  int prio, static_prio, normal_prio;
  unsigned int rt_priority;
  const struct sched_class *sched_class;
  struct sched_entity se;
  struct sched_rt_entity rt;

#ifdef CONFIG_PREEMPT_NOTIFIERS
  /* list of struct preempt_notifier: */
  struct hlist_head preempt_notifiers;
#endif

  /*
   * fpu_counter contains the number of consecutive context switches
   * that the FPU is used. If this is over a threshold, the lazy fpu
   * saving becomes unlazy to save the trap. This is an unsigned char
   * so that after 256 times the counter wraps and the behavior turns
   * lazy again; this to deal with bursty apps that only use FPU for
   * a short time
   */
  unsigned char fpu_counter;
#ifdef CONFIG_BLK_DEV_IO_TRACE
  unsigned int btrace_seq;
#endif

  unsigned int policy;
  cpumask_t cpus_allowed;

#ifdef CONFIG_PREEMPT_RCU
  int rcu_read_lock_nesting;
  char rcu_read_unlock_special;
  struct list_head rcu_node_entry;
#endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_TREE_PREEMPT_RCU
  struct rcu_node *rcu_blocked_node;
#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
#ifdef CONFIG_RCU_BOOST
  struct rt_mutex *rcu_boost_mutex;
#endif /* #ifdef CONFIG_RCU_BOOST */

#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
  struct sched_info sched_info;
#endif

  struct list_head tasks;
#ifdef CONFIG_SMP
  struct plist_node pushable_tasks;
#endif

  struct mm_struct *mm, *active_mm;
#if defined(SPLIT_RSS_COUNTING)
  struct task_rss_stat  rss_stat;
#endif
/* task state */
  int exit_state;
  int exit_code, exit_signal;
  int pdeath_signal;  /*  The signal sent when the parent dies  */
  /* ??? */
  unsigned int personality;
  unsigned did_exec:1;
  unsigned in_execve:1; /* Tell the LSMs that the process is doing an
         * execve */
  unsigned in_iowait:1;


  /* Revert to default priority/policy when forking */
  unsigned sched_reset_on_fork:1;

  pid_t pid;
  pid_t tgid;

#ifdef CONFIG_CC_STACKPROTECTOR
  /* Canary value for the -fstack-protector gcc feature */
  unsigned long stack_canary;
#endif

  /* 
   * pointers to (original) parent process, youngest child, younger sibling,
   * older sibling, respectively.  (p->father can be replaced with 
   * p->real_parent->pid)
   */
  struct task_struct *real_parent; /* real parent process */
  struct task_struct *parent; /* recipient of SIGCHLD, wait4() reports */
  /*
   * children/sibling forms the list of my natural children
   */
  struct list_head children;  /* list of my children */
  struct list_head sibling; /* linkage in my parent's children list */
  struct task_struct *group_leader; /* threadgroup leader */

  /*
   * ptraced is the list of tasks this task is using ptrace on.
   * This includes both natural children and PTRACE_ATTACH targets.
   * p->ptrace_entry is p's link on the p->parent->ptraced list.
   */
  struct list_head ptraced;
  struct list_head ptrace_entry;

  /* PID/PID hash table linkage. */
  struct pid_link pids[PIDTYPE_MAX];
  struct list_head thread_group;

  struct completion *vfork_done;    /* for vfork() */
  int __user *set_child_tid;    /* CLONE_CHILD_SETTID */
  int __user *clear_child_tid;    /* CLONE_CHILD_CLEARTID */

  cputime_t utime, stime, utimescaled, stimescaled;
  cputime_t gtime;
#ifndef CONFIG_VIRT_CPU_ACCOUNTING
  cputime_t prev_utime, prev_stime;
#endif
  unsigned long nvcsw, nivcsw; /* context switch counts */
  struct timespec start_time;     /* monotonic time */
  struct timespec real_start_time;  /* boot based time */
/* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */
  unsigned long min_flt, maj_flt;

  struct task_cputime cputime_expires;
  struct list_head cpu_timers[3];

/* process credentials */
  const struct cred __rcu *real_cred; /* objective and real subjective task
           * credentials (COW) */
  const struct cred __rcu *cred;  /* effective (overridable) subjective task
           * credentials (COW) */
  struct cred *replacement_session_keyring; /* for KEYCTL_SESSION_TO_PARENT */

  char comm[TASK_COMM_LEN]; /* executable name excluding path
             - access with [gs]et_task_comm (which lock
               it with task_lock())
             - initialized normally by setup_new_exec */
/* file system info */
  int link_count, total_link_count;
#ifdef CONFIG_SYSVIPC
/* ipc stuff */
  struct sysv_sem sysvsem;
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
/* hung task detection */
  unsigned long last_switch_count;
#endif
/* CPU-specific state of this task */
  struct thread_struct thread;
/* filesystem information */
  struct fs_struct *fs;
/* open file information */
  struct files_struct *files;
/* namespaces */
  struct nsproxy *nsproxy;
/* signal handlers */
  struct signal_struct *signal;
  struct sighand_struct *sighand;

  sigset_t blocked, real_blocked;
  sigset_t saved_sigmask; /* restored if set_restore_sigmask() was used */
  struct sigpending pending;

  unsigned long sas_ss_sp;
  size_t sas_ss_size;
  int (*notifier)(void *priv);
  void *notifier_data;
  sigset_t *notifier_mask;
  struct audit_context *audit_context;
#ifdef CONFIG_AUDITSYSCALL
  uid_t loginuid;
  unsigned int sessionid;
#endif
  seccomp_t seccomp;

/* Thread group tracking */
    u32 parent_exec_id;
    u32 self_exec_id;
/* Protection of (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed,
 * mempolicy */
  spinlock_t alloc_lock;

#ifdef CONFIG_GENERIC_HARDIRQS
  /* IRQ handler threads */
  struct irqaction *irqaction;
#endif

  /* Protection of the PI data structures: */
  raw_spinlock_t pi_lock;

#ifdef CONFIG_RT_MUTEXES
  /* PI waiters blocked on a rt_mutex held by this task */
  struct plist_head pi_waiters;
  /* Deadlock detection and priority inheritance handling */
  struct rt_mutex_waiter *pi_blocked_on;
#endif

#ifdef CONFIG_DEBUG_MUTEXES
  /* mutex deadlock detection */
  struct mutex_waiter *blocked_on;
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
  unsigned int irq_events;
  unsigned long hardirq_enable_ip;
  unsigned long hardirq_disable_ip;
  unsigned int hardirq_enable_event;
  unsigned int hardirq_disable_event;
  int hardirqs_enabled;
  int hardirq_context;
  unsigned long softirq_disable_ip;
  unsigned long softirq_enable_ip;
  unsigned int softirq_disable_event;
  unsigned int softirq_enable_event;
  int softirqs_enabled;
  int softirq_context;
#endif
#ifdef CONFIG_LOCKDEP
# define MAX_LOCK_DEPTH 48UL
  u64 curr_chain_key;
  int lockdep_depth;
  unsigned int lockdep_recursion;
  struct held_lock held_locks[MAX_LOCK_DEPTH];
  gfp_t lockdep_reclaim_gfp;
#endif

/* journalling filesystem info */
  void *journal_info;

/* stacked block device info */
  struct bio_list *bio_list;

/* VM state */
  struct reclaim_state *reclaim_state;

  struct backing_dev_info *backing_dev_info;

  struct io_context *io_context;

  unsigned long ptrace_message;
  siginfo_t *last_siginfo; /* For ptrace use.  */
  struct task_io_accounting ioac;
#if defined(CONFIG_TASK_XACCT)
  u64 acct_rss_mem1;  /* accumulated rss usage */
  u64 acct_vm_mem1; /* accumulated virtual memory usage */
  cputime_t acct_timexpd; /* stime + utime since last update */
#endif
#ifdef CONFIG_CPUSETS
  nodemask_t mems_allowed;  /* Protected by alloc_lock */
  int mems_allowed_change_disable;
  int cpuset_mem_spread_rotor;
  int cpuset_slab_spread_rotor;
#endif
#ifdef CONFIG_CGROUPS
  /* Control Group info protected by css_set_lock */
  struct css_set __rcu *cgroups;
  /* cg_list protected by css_set_lock and tsk->alloc_lock */
  struct list_head cg_list;
#endif
#ifdef CONFIG_FUTEX
  struct robust_list_head __user *robust_list;
#ifdef CONFIG_COMPAT
  struct compat_robust_list_head __user *compat_robust_list;
#endif
  struct list_head pi_state_list;
  struct futex_pi_state *pi_state_cache;
#endif
#ifdef CONFIG_PERF_EVENTS
  struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
  struct mutex perf_event_mutex;
  struct list_head perf_event_list;
#endif
#ifdef CONFIG_NUMA
  struct mempolicy *mempolicy;  /* Protected by alloc_lock */
  short il_next;
#endif
  atomic_t fs_excl; /* holding fs exclusive resources */
  struct rcu_head rcu;

  /*
   * cache last used pipe for splice
   */
  struct pipe_inode_info *splice_pipe;
#ifdef  CONFIG_TASK_DELAY_ACCT
  struct task_delay_info *delays;
#endif
#ifdef CONFIG_FAULT_INJECTION
  int make_it_fail;
#endif
  struct prop_local_single dirties;
#ifdef CONFIG_LATENCYTOP
  int latency_record_count;
  struct latency_record latency_record[LT_SAVECOUNT];
#endif
  /*
   * time slack values; these are used to round up poll() and
   * select() etc timeout values. These are in nanoseconds.
   */
  unsigned long timer_slack_ns;
  unsigned long default_timer_slack_ns;

  struct list_head  *scm_work_list;
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
  /* Index of current stored address in ret_stack */
  int curr_ret_stack;
  /* Stack of return addresses for return function tracing */
  struct ftrace_ret_stack *ret_stack;
  /* time stamp for last schedule */
  unsigned long long ftrace_timestamp;
  /*
   * Number of functions that haven't been traced
   * because of depth overrun.
   */
  atomic_t trace_overrun;
  /* Pause for the tracing */
  atomic_t tracing_graph_pause;
#endif
#ifdef CONFIG_TRACING
  /* state flags for use by tracers */
  unsigned long trace;
  /* bitmask of trace recursion */
  unsigned long trace_recursion;
#endif /* CONFIG_TRACING */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR /* memcg uses this to do batch job */
  struct memcg_batch_info {
    int do_batch; /* incremented when batch uncharge started */
    struct mem_cgroup *memcg; /* target memcg of uncharge */
    unsigned long bytes;    /* uncharged usage */
    unsigned long memsw_bytes; /* uncharged mem+swap usage */
  } memcg_batch;
#endif
};
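A few of these fields can be observed directly from user space: pid is what the kernel calls a thread ID, tgid is the thread-group (process) ID, and comm is the executable name. A small sketch of mine (not from the kernel source) to see them:

/* Sketch: observing task_struct's pid, tgid and comm from user space.
 * gettid() returns task_struct.pid; getpid() returns task_struct.tgid. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    char comm[64] = {0};
    FILE *f = fopen("/proc/self/comm", "r");   /* backed by task_struct.comm */
    if (f) { fgets(comm, sizeof comm, f); fclose(f); }
    printf("pid (tgid) = %d, tid (pid) = %ld, comm = %s",
           (int)getpid(), (long)syscall(SYS_gettid), comm);
    return 0;
}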

A context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking operating system.

The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.

Context switches are usually computationally intensive, and much of the design of operating systems is aimed at optimizing their use. Switching from one process to another requires a certain amount of time for administration: saving and loading registers and memory maps, updating various tables and lists, and so on. What a context switch actually involves depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources compared to unrelated, non-cooperating processes). For example, in the Linux kernel, context switching involves switching the registers, the stack pointer (typically a dedicated stack-pointer register), and the program counter, flushing the translation lookaside buffer (TLB), and loading the page table of the next process to run (unless the old process shares its memory with the new one).
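One common way to get a rough feel for this cost is a pipe "ping-pong" between two processes pinned to the same CPU, so that every round trip forces two context switches. A hedged sketch (my addition; the iteration count and the CPU-pinning choice are arbitrary, and results vary widely by machine):

/* Rough context-switch cost estimate: two processes ping-pong one byte
 * over a pair of pipes; each round trip forces ~2 context switches.
 * Both are pinned to CPU 0 so they cannot run in parallel. Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int a[2], b[2];
    const long N = 100000;
    char c = 'x';
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof set, &set);   /* inherited across fork() */

    pipe(a); pipe(b);
    if (fork() == 0) {                        /* child: echo every byte back */
        for (long i = 0; i < N; i++) {
            read(a[0], &c, 1);
            write(b[1], &c, 1);
        }
        _exit(0);
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++) {            /* parent: send, wait for echo */
        write(a[1], &c, 1);
        read(b[0], &c, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per context switch\n", ns / (N * 2.0));
    return 0;
}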

Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.
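POSIX <ucontext.h> makes this concrete: swapcontext() saves the current user-level register context and loads another, with no kernel scheduler involvement, much like a coroutine yield. A minimal sketch of my own illustrating the idea:

/* User-level context switch with <ucontext.h>: swapcontext() saves the
 * current registers/stack pointer and restores another context, without
 * a kernel-mediated thread switch -- the "green thread" idea in miniature. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];          /* stack for the coroutine */

static void coroutine(void)
{
    printf("coroutine: first run\n");
    swapcontext(&co_ctx, &main_ctx);      /* "yield" back to main */
    printf("coroutine: resumed\n");
}

int main(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link = &main_ctx;           /* where to go when coroutine returns */
    makecontext(&co_ctx, coroutine, 0);

    swapcontext(&main_ctx, &co_ctx);      /* runs coroutine until its first yield */
    printf("main: coroutine yielded\n");
    swapcontext(&main_ctx, &co_ctx);      /* resume it to completion */
    printf("main: done\n");
    return 0;
}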

Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable. To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch.
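The two cases, blocking voluntarily versus being preempted, are counted separately by the kernel in task_struct's nvcsw and nivcsw fields (visible in the listing above) and exposed through getrusage(). A short sketch of mine to read them:

/* The kernel counts the two kinds of switch separately in task_struct's
 * nvcsw/nivcsw fields; getrusage() exposes them as ru_nvcsw (voluntary,
 * e.g. blocking on I/O) and ru_nivcsw (involuntary, e.g. time slice expiry). */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    sleep(1);                             /* block: at least one voluntary switch */
    getrusage(RUSAGE_SELF, &ru);
    printf("voluntary: %ld, involuntary: %ld\n", ru.ru_nvcsw, ru.ru_nivcsw);
    return 0;
}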

Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU can be interrupted (by hardware in this case, which sends an interrupt request to the PIC [Programmable Interrupt Controller]) and presented with the data that was read. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk.

When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
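User space has a close analogue in signal handlers: the kernel interrupts the running code, a registered handler runs briefly in the interrupted context, and the code then resumes where it left off. A sketch using sigaction() (an analogy of mine, not an actual hardware interrupt handler):

/* User-space analogue of an interrupt handler: SIGALRM "interrupts" the
 * main loop, the registered handler runs in the interrupted context,
 * and the loop then resumes exactly where it was. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;   /* only async-signal-safe state */

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                              /* keep the "handler" minimal */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_alarm;             /* install the handler */
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);                             /* the "device" fires in 1 second */
    while (ticks == 0)
        pause();                          /* wait; resumes after the handler */
    printf("interrupted and resumed, ticks = %d\n", (int)ticks);
    return 0;
}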

When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.
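A cheap way to see that a mode transition by itself is not a full context switch is to time a trivial system call; on current hardware it typically costs well under a microsecond, far less than a process switch. A rough sketch (my addition; the timing approach is an assumption, not a rigorous benchmark):

/* Timing a trivial syscall: each call is a user->kernel->user mode
 * transition but (normally) no context switch. Compare the per-call
 * cost with the ping-pong estimate above. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long N = 1000000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        syscall(SYS_getpid);              /* force a real syscall each time */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per mode transition (getpid syscall)\n", ns / N);
    return 0;
}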

Context switching can be performed primarily by software or hardware. Some processors, like the Intel 80386 and its successors, have hardware support for context switches, making use of a special data segment designated the task state segment (TSS). A task switch can be explicitly triggered with a CALL or JMP instruction targeting a TSS descriptor in the global descriptor table (GDT). It can occur implicitly when an interrupt or exception is triggered if there is a task gate in the interrupt descriptor table (IDT). When a task switch occurs, the CPU can automatically load the new state from the TSS.
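For reference, the 32-bit TSS that the 80386 loads from and stores to on a hardware task switch has a fixed 104-byte layout. A sketch of its fields in C (my rendering, following the layout documented in Intel's Software Developer's Manual):

/* Layout of the 32-bit x86 Task State Segment (TSS) that the CPU
 * saves/restores on a hardware task switch. Each selector occupies a
 * 32-bit slot with the upper 16 bits reserved. Mainstream kernels keep
 * one TSS per CPU mostly just for esp0/ss0. */
#include <stdint.h>

struct tss32 {
    uint32_t prev_task_link;              /* backlink for nested tasks */
    uint32_t esp0, ss0;                   /* ring-0 stack, used on traps */
    uint32_t esp1, ss1;
    uint32_t esp2, ss2;
    uint32_t cr3;                         /* page-table base of the task */
    uint32_t eip, eflags;                 /* program counter and flags */
    uint32_t eax, ecx, edx, ebx;          /* general-purpose registers */
    uint32_t esp, ebp, esi, edi;
    uint32_t es, cs, ss, ds, fs, gs;      /* segment selectors */
    uint32_t ldt_selector;
    uint16_t trap_flag;                   /* debug trap on task switch */
    uint16_t iomap_base;                  /* offset of the I/O permission bitmap */
} __attribute__((packed));                /* 104 bytes total */

Note the absence of any floating-point registers in this layout, which is exactly the first limitation in the list below.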

As with other tasks performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows and Linux, do not use this feature, mainly for two reasons:

  • Hardware context switching does not save all the registers (only general-purpose registers, not floating point registers — although the TS bit is automatically turned on in the CR0 control register, resulting in a fault when executing floating-point instructions and giving the OS the opportunity to save and restore the floating-point state as needed).
  • Associated performance issues, e.g., software context switching can be selective and store only those registers that need storing, whereas hardware context switching stores nearly all registers whether they are required or not.

CET-6 / postgraduate entrance exam vocabulary: weave, textile, texture, pretext, compute, execute, thread, multiple, instruct, concurrent, implement, resume, indispensable, interrupt, intensive, administer, update, stack, flush, translate, buffer, seldom, equivalent, yield, expense, trigger, starve, slice, data, issue, hardware, install, transition, necessity, successor, segment, designate, implicit, globe, mainstream, float
