MIT 6.S081 Lab6 user-level threads and alarm

Preface

  In a blog post a few days ago about Lab5 Copy On Write fork (https://www.cnblogs.com/KatyuMarisaBlog/p/13932190.html, give it a like, pretty please), I said it was the easiest lab I had done. Today I take that back and solemnly declare that Lab6 user-level threads and alarm is the easiest..... one afternoon plus half an evening and it was done, and most of that time went into looking up references to double-check details I was worried about; I barely did any debugging at all. I'm feeling unstoppable again (sure: your 15-445 lint check failed, clang-tidy is still blowing up, and 6.824 still hasn't been refactored, unstoppable my foot).

   That said, while this lab involves very little code, it demands plenty of thinking, and in particular a solid grasp of xv6's process-related code. Finishing it deepened my understanding of processes another notch; it's about time I wrote up a summary of processes in xv6. That's for later though (definitely before the weekend). Today let's get this Lab6 post done; I'm half a zombie every Monday anyway......

  Lab link: https://pdos.csail.mit.edu/6.828/2019/labs/syscall.html

Warm Up

  The warm up teaches us that no matter how much you'd rather not touch assembly, today you make an exception.... even if the warm up material isn't of much practical use.....

  The warm up asks us to read user/call.c and its disassembly user/call.asm, then answer the questions below:

// user/call.c
#include "kernel/param.h"
#include "kernel/types.h"
#include "kernel/stat.h"
#include "user/user.h"

int g(int x) {
  return x+3;
}

int f(int x) {
  return g(x);
}

void main(void) {
  printf("%d %d\n", f(8)+1, 13);
  exit(0);
}


(1)Which registers contain arguments to functions? For example, which register holds 13 in main's call to printf ?

  In plain terms: which registers are used to pass arguments during a function call?

  Look at the disassembly:

user/_call:     file format elf64-littleriscv


Disassembly of section .text:

0000000000000000 <g>:
#include "kernel/param.h"
#include "kernel/types.h"
#include "kernel/stat.h"
#include "user/user.h"

int g(int x) {
   0:    1141                    addi    sp,sp,-16
   2:    e422                    sd    s0,8(sp)
   4:    0800                    addi    s0,sp,16
  return x+3;
}
   6:    250d                    addiw    a0,a0,3
   8:    6422                    ld    s0,8(sp)
   a:    0141                    addi    sp,sp,16
   c:    8082                    ret

000000000000000e <f>:

int f(int x) {
   e:    1141                    addi    sp,sp,-16
  10:    e422                    sd    s0,8(sp)
  12:    0800                    addi    s0,sp,16
  return g(x);
}
  14:    250d                    addiw    a0,a0,3
  16:    6422                    ld    s0,8(sp)
  18:    0141                    addi    sp,sp,16
  1a:    8082                    ret

000000000000001c <main>:

void main(void) {
  1c:    1141                    addi    sp,sp,-16
  1e:    e406                    sd    ra,8(sp)
  20:    e022                    sd    s0,0(sp)
  22:    0800                    addi    s0,sp,16
  printf("%d %d\n", f(8)+1, 13);
  24:    4635                    li    a2,13
  26:    45b1                    li    a1,12
  28:    00000517              auipc    a0,0x0
  2c:    74850513              addi    a0,a0,1864 # 770 <malloc+0xea>
  30:    00000097              auipc    ra,0x0
  34:    598080e7              jalr    1432(ra) # 5c8 <printf>
  exit(0);
  38:    4501                    li    a0,0
  3a:    00000097              auipc    ra,0x0
  3e:    1f6080e7              jalr    502(ra) # 230 <exit>

  Looking at the disassembly, it's easy to see that the three registers a0 ~ a2 hold printf's three arguments. a2 holds the immediate 13. f(8) + 1 is a constant expression that the compiler has folded into the immediate 12, which goes in a1. a0 holds the address of the constant string "%d %d\n", which lives in the .rodata section. We can check the ELF file user/_call:
ms@ubuntu:~/public/MIT 6.S081/Lab6 user level thread/xv6-riscv-fall19$ readelf -x .rodata user/_call

Hex dump of section '.rodata':
0x000007d8 25642025 640a0000 286e756c 6c290000 %d %d...(null)..
0x000007e8 30313233 34353637 38394142 43444546 0123456789ABCDEF
0x000007f8 00                                  .
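  Decoding the first eight bytes of that dump by hand (plain ASCII, nothing xv6-specific):

25 64 20 25 64 0a 00 00
 %  d     %  d  \n \0 \0    <- the 0a byte is the '\n'; readelf renders non-printable bytes as '.'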
  Emmm..... so the string at 0x7d8 does contain its newline after all; the ASCII column just hides the 0a behind a dot. This should be our string.

  I wasn't fully convinced, so I also stepped through call with gdb:

Reading symbols from kernel/kernel...
The target architecture is assumed to be riscv:rv64
0x0000000000001000 in ?? ()
(gdb) file user/_call 
Reading symbols from user/_call...
(gdb) b main
Breakpoint 1 at 0x1c: file user/call.c, line 14.
(gdb) c
Continuing.

Breakpoint 1, main () at user/call.c:14
14      void main(void) {
(gdb) display /i $ pc
A syntax error in expression, near `pc'.
(gdb) display /i $pc 
1: x/i $pc
=> 0x1c <main>: addi    sp,sp,-16
(gdb) si
0x000000000000001e      14      void main(void) {
1: x/i $pc
=> 0x1e <main+2>:       sd      ra,8(sp)
(gdb) 
0x0000000000000020      14      void main(void) {
1: x/i $pc
=> 0x20 <main+4>:       sd      s0,0(sp)
(gdb) 
0x0000000000000022      14      void main(void) {
1: x/i $pc
=> 0x22 <main+6>:       addi    s0,sp,16
(gdb) 
15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x24 <main+8>:       li      a2,13
(gdb) 
0x0000000000000026      15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x26 <main+10>:      li      a1,12
(gdb) 
0x0000000000000028      15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x28 <main+12>:      auipc   a0,0x0
(gdb) info reg a0
a0             0x1      1
(gdb) si
0x000000000000002c      15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x2c <main+16>:      addi    a0,a0,1968
(gdb) info reg a0
a0             0x28     40
(gdb) si
0x0000000000000030      15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x30 <main+20>:      auipc   ra,0x0
(gdb) info reg a0
a0             0x7d8    2008
(gdb) x /8c 0x7d8
0x7d8:  37 '%'  100 'd' 32 ' '  37 '%'  100 'd' 10 '\n' 0 '\000'        0 '\000'
(gdb) info reg ra
ra             0xfe     0xfe <gets+4>
(gdb) si
0x0000000000000034      15        printf("%d %d\n", f(8)+1, 13);
1: x/i $pc
=> 0x34 <main+24>:      jalr    1536(ra)
(gdb) info reg ra
ra             0x30     0x30 <main+20>
(gdb) 

  Hmm.... this confirms that 0x7d8 is the starting address of the constant string "%d %d\n" in the .rodata section.

  But is this auipc really just there to piece together the value 0x7d8? It is: a single RISC-V instruction can't hold a full address as an immediate, so the compiler builds a pc-relative address in two steps, auipc to add an upper immediate (shifted left 12 bits) to pc, then addi to tack on the low 12 bits.
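  Walking through the two instructions from the gdb session above makes the arithmetic concrete:

0x28: auipc a0,0x0        # a0 = pc + (0x0 << 12) = 0x28
0x2c: addi  a0,a0,1968    # a0 = 0x28 + 0x7b0 = 0x7d8, the address of "%d %d\n"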
  Also note the last few steps of the session: right before the jalr, ra holds 0x30, but that is just the result of the auipc used to compute printf's address; the jalr itself overwrites ra with the address of the next instruction, which is exactly where printf should return to.
  For how to inspect the .rodata section of an ELF file, see: https://stackoverflow.com/questions/1685483/how-can-i-examine-contents-of-a-data-section-of-an-elf-file-on-linux

(2)Where is the function call to f from main? Where is the call to g? (Hint: the compiler may inline functions.)

  The hint is the answer... there is no call to f or g at all. g was inlined into f (f's body is just the single addiw a0,a0,3), and f(8)+1 is a constant expression the compiler folded into the immediate 12 loaded directly into a1.

(3)At what address is the function printf located?

  A look at call.asm answers this: printf is at 0x630 in my linked binary (the exact address is fixed at link time, so it can differ between builds; the excerpt pasted earlier came from an older compile and shows 0x5c8). Once linking produces the ELF file _call, xv6's exec loads it into the process when the program runs.
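  The gdb session above confirms the 0x630: the call target is built pc-relative, just like the string address:

0x30: auipc ra,0x0        # ra = 0x30
0x34: jalr  1536(ra)      # jump to 0x30 + 0x600 = 0x630 <printf>; ra becomes 0x38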

(4)What value is in the register ra just after the jalr to printf in main?

  While writing this post I realized I forgot to do this one..... see this blog: https://blog.csdn.net/RedemptionC/article/details/107874096. The gist: jalr is not an ordinary jump; besides jumping to ra + offset, it also writes the address of the following instruction into ra. So just after the jalr to printf, ra holds 0x38, the address of the li a0,0 that follows, which is where printf returns.

Part 1 Uthread Switching

  This part has us borrow (copy) kernel/swtch.S to implement thread switching for something that barely qualifies as threads.
  Wikipedia tells us:
Threads in the same process share all of the process's system resources, such as the virtual address space, file descriptors, and signal handling, but each thread has its own call stack, its own register context, and its own thread-local storage.
https://zh.wikipedia.org/wiki/%E7%BA%BF%E7%A8%8B

  Threads are lightweight: a thread owns only the little it needs to run, the three most important pieces being its stack space, its thread control block, and its register context. user/uthread.c kindly provides the thread data structure:

struct thread {
  char       stack[STACK_SIZE]; /* the thread's stack */
  int        state;             /* FREE, RUNNING, RUNNABLE */
};

  As you can see, this structure covers only two of those pieces: the thread control block (really just a state) and the call stack (stack); the register context is ours to add.

  The lab's hints tell us that the starter code creates three threads, thread_a, thread_b and thread_c, in main, and asks us to complete thread_create and thread_schedule, plus the assembly in user/uthread_switch.S, to achieve two things:

(1) when a thread is scheduled for the first time, it executes the function passed into thread_create;

(2) when a thread calls thread_yield and thread_schedule, a thread switch takes place.

  Looking at thread_a, thread_b and thread_c, the idea is that they wake each other up and switch back and forth like a pack of shady bloatware apps: as soon as a thread gets the CPU it prints one number and immediately gives the CPU up, and the scheduler hands it to another thread, round and round until all three threads have printed their 100 numbers:

void 
thread_a(void)
{
  int i;
  printf("thread_a started\n");
  a_started = 1;
  while(b_started == 0 || c_started == 0)
    thread_yield();
  
  for (i = 0; i < 100; i++) {
    printf("thread_a %d\n", i);
    a_n += 1;
    thread_yield();
  }
  printf("thread_a: exit after %d\n", a_n);

  current_thread->state = FREE;
  thread_schedule();
}

Design and Implementation

  First, recall how xv6's very first user process gets executed and scheduled:

(1) userinit calls allocproc, which sets p->context.ra to the forkret function in kernel/proc.c; userinit sets p->tf->epc to 0, copies the byte array initcode from kernel/proc.c into the process's memory as its program text, and finally marks the process RUNNABLE, i.e., waiting to be scheduled.

(2) scheduler in kernel/proc.c picks the process and switches context with swtch, saving the current context (inside the scheduler function) into struct cpu's scheduler field and loading p->context into the CPU. swtch ends with a ret instruction, which copies ra into pc, so when swtch finishes, pc points at forkret.

(3) the process runs forkret, and forkret calls usertrapret.

(4) usertrapret writes p->tf->epc into the sepc register; p->tf->epc is 0 at this point, exactly the starting virtual address of main in initcode. usertrapret finishes by calling the userret function in kernel/trampoline.S.

(5) userret ends with an sret instruction, which copies sepc into pc; once it executes, pc is 0 and the code in initcode starts running.

P.S:
  The last two lines of usertrapret:
  uint64 fn = TRAMPOLINE + (userret - trampoline);
  ((void (*)(uint64,uint64))fn)(TRAPFRAME, satp);

  simply call the userret function in kernel/trampoline.S: TRAMPOLINE + (userret - trampoline) is userret's address inside the trampoline page, which is mapped at TRAMPOLINE in every address space. For details see my blog: (聊聊xv6中的trap https://www.cnblogs.com/KatyuMarisaBlog/p/13934537.html)

  If you worked this flow out while reading the xv6 source, the approach for Part 1 becomes perfectly clear.

  (1) First, if the CPU switches from one thread to another, the old thread's context must be saved, so struct thread needs a new member recording the context at the moment the CPU was switched away:

struct thread {
  char       stack[STACK_SIZE]; /* the thread's stack */
  int        state;             /* FREE, RUNNING, RUNNABLE */
  struct     threadcontext ctxt;    /* thread context */
};

  (2) When a thread switch happens, call thread_switch to save the current thread's context into its ctxt and load the new thread's ctxt into the registers. At initialization, assign the function pointer passed to thread_create into ctxt.ra; then the first time the thread is scheduled, thread_switch writes ctxt.ra into the ra register and finishes with ret, which updates pc to ra, so the thread starts executing the function it was created with.

  The overall code is simple as well:

void 
thread_schedule(void)
{
  struct thread *t, *next_thread;

  /* Find another runnable thread. */
  next_thread = 0;
  t = current_thread + 1;
  for(int i = 0; i < MAX_THREAD; i++){
    if(t >= all_thread + MAX_THREAD)
      t = all_thread;
    if(t->state == RUNNABLE) {
      next_thread = t;
      break;
    }
    t = t + 1;
  }

  if (next_thread == 0) {
    printf("thread_schedule: no runnable threads\n");
    exit(-1);
  }

  if (current_thread != next_thread) {         /* switch threads?  */
    next_thread->state = RUNNING;
    t = current_thread;
    current_thread = next_thread;
    /* YOUR CODE HERE
     * Invoke thread_switch to switch from t to next_thread:
     * thread_switch(??, ??);
     */
    thread_switch((uint64)&t->ctxt, (uint64)&current_thread->ctxt);
  } else
    next_thread = 0;
}

void 
thread_create(void (*func)())
{
  struct thread *t;

  for (t = all_thread; t < all_thread + MAX_THREAD; t++) {
    if (t->state == FREE) break;
  }
  t->state = RUNNABLE;
  // YOUR CODE HERE
  memset(&t->ctxt, 0, sizeof(t->ctxt));
  t->ctxt.ra = (uint64)func;
  t->ctxt.sp = (uint64)(t->stack + STACK_SIZE);  // xv6 stacks grow from high addresses toward low ones
}

  The threadcontext structure looks like this:

// user/uthread.c
struct threadcontext {
  uint64 ra;
  uint64 sp;

  // callee-saved
  uint64 s0;
  uint64 s1;
  uint64 s2;
  uint64 s3;
  uint64 s4;
  uint64 s5;
  uint64 s6;
  uint64 s7;
  uint64 s8;
  uint64 s9;
  uint64 s10;
  uint64 s11;
};

   The total amount of code is tiny, but at this point I was genuinely puzzled: why do both process switches and thread switches save only ra, sp and s0 ~ s11? Why don't the other registers need protecting? Even the hint prods us:

  • thread_switch needs to save/restore only the callee-save registers. Why?

   What on earth is callee-save..... look it up, what else can you do.

Caller-saved and callee-saved registers

   And I really did find answers. A warning up front, though: unless you are deeply interested in instruction-level details, I don't recommend digging into this part; a working understanding is plenty. Even the stackoverflow answerer grumbles:
The caller-saved / callee-saved terminology is based on a pretty braindead inefficient model of programming where callers actually do save/restore all the call-clobbered registers (instead of keeping long-term-useful values elsewhere)......
Source:
https://stackoverflow.com/questions/9268586/what-are-callee-and-caller-saved-registers
  First, imagine the following scenario:
void f() {
  .....
  save value to register r;
  g();
  use value in register r;
  .....  
}

void g() {
  .....
  save value to register r;
  .....
}

  While f runs, it stashes some value in register r, intending to use it after g returns; f does not want r to change while g executes. But g writes to register r as well, so f ends up computing a wrong result. Calls like this are clearly common and unavoidable; worse, f and g might be defined in two different object files, so the compiler can't even analyze its way around the problem.

  Of course, we also know that in reality this kind of error doesn't happen. Even knowing nothing, I can propose two fixes for the scenario above:

(1) before calling g, save all the registers onto the stack; after g finishes, read them back from the stack and restore the registers;

(2) same idea as (1), but hand the work to g: when g starts, it saves all the registers to the stack, and restores them from the stack before it returns;

  The RISC-V designers would scoff: why bother with all that waste? We've already settled it: certain registers are saved by the caller (f above), and certain registers are saved by the callee (g above):

In addition to the argument and return value registers, seven integer registers t0–t6 and twelve floating-point registers ft0–ft11 are temporary registers that are volatile across calls and must be saved by the caller if later used. Twelve integer registers s0–s11 and twelve floating-point registers fs0–fs11 are preserved across calls and must be saved by the callee if used.

Source: the RISC-V calling convention document (riscv-calling), usually found under doc/ in the repo you git clone.
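  To make the division of labor concrete, here is a minimal C sketch of my own (not from the lab) showing how the convention resolves the f/g conflict above; the register assignments in the comments are illustrative, the compiler decides the details:

void g(void);   // possibly defined in another object file; the compiler can't see inside

int f(int x) {
    int keep = x * 2;   // a value that must survive the call to g
    g();                // g may freely clobber t0-t6 and a0-a7 (caller-saved),
                        // but must preserve s0-s11 (callee-saved)
    return keep + 1;    // so the compiler either keeps `keep` in an s-register
                        // or spills it into f's own stack frame; it is valid here
}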

  So after the caller and callee save these registers, when do they restore them?


Caller-saved registers (AKA volatile registers, or call-clobbered) are used to hold temporary quantities that need not be preserved across calls.

For that reason, it is the caller's responsibility to push these registers onto the stack or copy them somewhere else if it wants to restore this value after a procedure call.

It's normal to let a call destroy temporary values in these registers, though.

Callee-saved registers (AKA non-volatile registers, or call-preserved) are used to hold long-lived values that should be preserved across calls.

When the caller makes a procedure call, it can expect that those registers will hold the same value after the callee returns, making it the responsibility of the callee to save them and restore them before returning to the caller. Or to not touch them.

Source:

https://stackoverflow.com/questions/9268586/what-are-callee-and-caller-saved-registers

  Let's look at an example in xv6; the following assembly comes from user/ls.asm:

// part of user/ls.asm

0000000000000400 <stat>:

int stat(const char *n, struct stat *st) {
 400:	1101                	addi	sp,sp,-32
 402:	ec06                	sd	ra,24(sp)
 404:	e822                	sd	s0,16(sp)
 406:	e426                	sd	s1,8(sp)
 408:	e04a                	sd	s2,0(sp)
 40a:	1000                	addi	s0,sp,32
 40c:	892e                	mv	s2,a1
  int fd;
  int r;

  fd = open(n, O_RDONLY);
 40e:	4581                	li	a1,0
 410:	00000097          	auipc	ra,0x0
 414:	172080e7          	jalr	370(ra) # 582 <open>
  if(fd < 0)
 418:	02054563          	bltz	a0,442 <stat+0x42>
 41c:	84aa                	mv	s1,a0
    return -1;
  r = fstat(fd, st);
 41e:	85ca                	mv	a1,s2
 420:	00000097          	auipc	ra,0x0
 424:	17a080e7          	jalr	378(ra) # 59a <fstat>
 428:	892a                	mv	s2,a0
  close(fd);
 42a:	8526                	mv	a0,s1
 42c:	00000097          	auipc	ra,0x0
 430:	13e080e7          	jalr	318(ra) # 56a <close>
  return r;
}
 434:	854a                	mv	a0,s2
 436:	60e2                	ld	ra,24(sp)
 438:	6442                	ld	s0,16(sp)
 43a:	64a2                	ld	s1,8(sp)
 43c:	6902                	ld	s2,0(sp)
 43e:	6105                	addi	sp,sp,32
 440:	8082                	ret
    return -1;
 442:	597d                	li	s2,-1
 444:	bfc5                	j	434 <stat+0x34>

  Note that if stat's code is executing, then stat must have been called by some function, so stat is a callee; at the same time stat calls other functions, so stat is also a caller.

  As a callee, stat saves the callee-saved registers s1 and s2, and restores s1 and s2 from the stack before returning;

  As a caller, stat saves ra (the doc doesn't spell this out, but each jalr that stat issues overwrites ra, so obviously the caller must preserve it; likewise stat restores sp by undoing its own stack adjustment before ret);

  Emmm..... this has drifted a bit far. All we really need to remember is that some registers are saved by the caller, and the others are saved by the callee.

  Now back to xv6: why do process and thread switches only need to save s0 ~ s11 plus ra and sp?

  Process switching first. It goes through the swtch function in kernel/swtch.S, which is called from two places:

void
scheduler(void)
{
  struct proc *p;
  struct cpu *c = mycpu();
  
  c->proc = 0;
  for(;;){
    // Avoid deadlock by giving devices a chance to interrupt.
    intr_on();

    // Run the for loop with interrupts off to avoid
    // a race between an interrupt and WFI, which would
    // cause a lost wakeup.
    intr_off();

    int found = 0;
    for(p = proc; p < &proc[NPROC]; p++) {
      acquire(&p->lock);
      if(p->state == RUNNABLE) {
        // Switch to chosen process.  It is the process's job
        // to release its lock and then reacquire it
        // before jumping back to us.
        p->state = RUNNING;
        c->proc = p;
        swtch(&c->scheduler, &p->context);
        .....
}

  The swtch in scheduler saves the current context, which is inside scheduler, into c->scheduler, and loads the context of the process p chosen in this round; when swtch finishes, the flow of control lands wherever p was last switched out (the line right after swtch in sched);

void
sched(void)
{
  int intena;
  struct proc *p = myproc();

  if(!holding(&p->lock))
    panic("sched p->lock");
  if(mycpu()->noff != 1)
    panic("sched locks");
  if(p->state == RUNNING)
    panic("sched running");
  if(intr_get())
    panic("sched interruptible");

  intena = mycpu()->intena;
  swtch(&p->context, &mycpu()->scheduler);
  mycpu()->intena = intena;
}

  The swtch in sched saves the context inside the sched function into p->context and loads c->scheduler; when swtch finishes, control returns into scheduler;

  So why don't we need to save the caller-saved registers, only the callee-saved ones? Because to scheduler and sched, swtch is simply a callee! Any caller-saved registers that needed saving were already saved before the call to swtch, just as for any ordinary function call!

  The same goes for thread scheduling, except swtch is replaced by thread_switch; to whoever calls it, thread_switch is likewise a callee.

    With that, I finally understood why process and thread switches only save the callee-saved registers.

  Nothing about Part 1 is puzzling anymore. The code for user/uthread_switch.S can be copied essentially verbatim from kernel/swtch.S (note it ends with a plain ret, unlike the sret that ends trampoline.S's userret):

// user/uthread_switch.S
    .text

    /*
         * save the old thread's registers,
         * restore the new thread's registers.
         */

    .globl thread_switch
thread_switch:
    /* YOUR CODE HERE */
    sd ra, 0(a0)
    sd sp, 8(a0)
    sd s0, 16(a0)
    sd s1, 24(a0)
    sd s2, 32(a0)
    sd s3, 40(a0)
    sd s4, 48(a0)
    sd s5, 56(a0)
    sd s6, 64(a0)
    sd s7, 72(a0)
    sd s8, 80(a0)
    sd s9, 88(a0)
    sd s10, 96(a0)
    sd s11, 104(a0)

    ld ra, 0(a1)
    ld sp, 8(a1)
    ld s0, 16(a1)
    ld s1, 24(a1)
    ld s2, 32(a1)
    ld s3, 40(a1)
    ld s4, 48(a1)
    ld s5, 56(a1)
    ld s6, 64(a1)
    ld s7, 72(a1)
    ld s8, 80(a1)
    ld s9, 88(a1)
    ld s10, 96(a1)
    ld s11, 104(a1)

    ret    /* return to ra */

After the changes, run uthread; the test passes:

qemu-system-riscv64 -machine virt -bios none -kernel kernel/kernel -m 128M -smp 1 -nographic -drive file=fs.img,if=none,format=raw,id=x0 -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0

xv6 kernel is booting

virtio disk init 0
init: starting sh
$ uthread
thread_a started
thread_b started
thread_c started
thread_c 0
thread_a 0
thread_b 0
thread_c 1
thread_a 1
thread_b 1
thread_c 2
thread_a 2
thread_b 2
thread_c 3
thread_a 3
thread_b 3

.....

thread_a 98
thread_b 98
thread_c 99
thread_a 99
thread_b 99
thread_c: exit after 100
thread_a: exit after 100
thread_b: exit after 100
thread_schedule: no runnable threads
$ 

Part 2 Alarm

   First, a recap of how xv6 switches processes:

(1) a timer interrupt (trap) fires, forcibly breaking the process's flow of execution, and the process jumps into usertrap;

(2) usertrap checks the trap cause, finds a timer interrupt, and the process calls yield;

(3) yield calls sched, and sched calls swtch, saving the process's current context into p->context and loading the scheduler context from mycpu()->scheduler; when swtch finishes, the flow of control is inside scheduler;

(4) scheduler picks the next process to run and calls swtch again, saving the scheduler context into mycpu()->scheduler and loading the chosen process's context; the chosen process starts executing.

 Note that a process is always switched out inside sched, so when it is scheduled again, pc points exactly at the line after swtch:

void
sched(void)
{
  int intena;
  struct proc *p = myproc();

  if(!holding(&p->lock))
    panic("sched p->lock");
  if(mycpu()->noff != 1)
    panic("sched locks");
  if(p->state == RUNNING)
    panic("sched running");
  if(intr_get())
    panic("sched interruptible");
 
  intena = mycpu()->intena;
 
 swtch(&p->context, &mycpu()->scheduler);  // -----------> the process is switched out at this line; the scheduler picks another process to run
  mycpu()->intena = intena;                // -----------> when this process is scheduled again, it resumes from this line
}

  Call the round trip from a timer interrupt breaking the process's execution to the process being scheduled again one "tick". Part 2 asks us to add a feature to processes: when the process's tick count meets a condition, invoke a registered function sighandler; after sighandler prints a line of "alarm!" and finishes, the process's own code continues. The prototype the lab gives is int sigalarm(int ticks, void (*handler)()); whenever the process's accumulated tick count satisfies total % ticks == 0, handler is invoked.
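  To see the interface from the user side, here is a minimal sketch modeled on user/alarmtest.c (the real test is more elaborate; this just shows the shape of a handler):

#include "kernel/types.h"
#include "user/user.h"

void periodic() {
  printf("alarm!\n");
  sigreturn();            // restore the interrupted context; resume where the tick hit
}

int main() {
  sigalarm(2, periodic);  // ask the kernel to call periodic() every 2 ticks
  for(;;)
    ;                     // spin; the kernel periodically diverts control to periodic()
}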

  When I first read Part 2 I thought it was a freebie, guessing that a few changed lines in usertrap plus a few new fields in kernel/proc.h would settle it. Step by step along the hints (the registration boilerplate is sketched right after the list):


  • You'll need to modify the Makefile to cause alarmtest.c to be compiled as an xv6 user program.
  • The right declarations to put in user/user.h are:
        int sigalarm(int ticks, void (*handler)());
        int sigreturn(void);
    
  • Update user/usys.pl (which generates user/usys.S), kernel/syscall.h, and kernel/syscall.c to allow alarmtest to invoke the sigalarm and sigreturn system calls.
  • For now, your sys_sigreturn should just return zero.
  • Your sys_sigalarm() should store the alarm interval and the pointer to the handler function in new fields in the proc structure (in kernel/proc.h).
  • You'll need to keep track of how many ticks have passed since the last call (or are left until the next call) to a process's alarm handler; you'll need a new field in struct proc for this too. You can initialize proc fields in allocproc() in proc.c.
  • Every tick, the hardware clock forces an interrupt, which is handled in usertrap(); you should add some code here.
  • You only want to manipulate a process's alarm ticks if there's a timer interrupt; you want something like
        if(which_dev == 2) ...
    
  • Only invoke the alarm function if the process has a timer outstanding. Note that the address of the user's alarm function might be 0 (e.g., in alarmtest.asm, periodic is at address 0).
  • It will be easier to look at traps with gdb if you tell qemu to use only one CPU, which you can do by running
        make CPUS=1 qemu-gdb
    
  • You've succeeded if alarmtest prints "alarm!".
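  The plumbing the first few hints describe looks roughly like this (a sketch; exact placement follows the existing entries in each file):

# Makefile: add the test program to the UPROGS list
	$U/_alarmtest\

# user/usys.pl: generate the ecall stubs for the two new syscalls
entry("sigalarm");
entry("sigreturn");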

  Tweak kernel/proc.h so that executing sigalarm registers the sighandler with the process:

// kernel/proc.h
// Per-process state
struct proc {
  struct spinlock lock;

  // p->lock must be held when using these:
  enum procstate state;        // Process state
  struct proc *parent;         // Parent process
  void *chan;                  // If non-zero, sleeping on chan
  int killed;                  // If non-zero, have been killed
  int xstate;                  // Exit status to be returned to parent's wait
  int pid;                     // Process ID

  // these are private to the process, so p->lock need not be held.
  uint64 kstack;               // Virtual address of kernel stack
  uint64 sz;                   // Size of process memory (bytes)
  pagetable_t pagetable;       // Page table
  struct trapframe *tf;        // data page for trampoline.S
  struct context context;      // swtch() here to run process
  struct file *ofile[NOFILE];  // Open files
  struct inode *cwd;           // Current directory
  char name[16];               // Process name (debugging)

  int sigpass;                 // if sigpass is 1, do not invoke sighandler
  int round;                   // if round is 0, no sighandler is registered
  int ticks;                   // total ticks since sighandler was registered; if ticks % round == 0, invoke sighandler
  void (*sighandler);          // user-space address of the handler
};
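  Per the hint, these fields should start out zeroed when a process is allocated; a minimal sketch in allocproc() (kernel/proc.c):

// kernel/proc.c, inside allocproc(): no handler registered initially
p->sigpass = 0;
p->round = 0;
p->ticks = 0;
p->sighandler = 0;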

  Then touch a few more files to add the new system calls sys_sigalarm and sys_sigreturn:

// kernel/syscall.c
int sigalarm(int, void (*)());
int sigreturn(void);

uint64
sys_sigalarm(void)
{
  int ticks;
  void (*handler);
  if(argint(0, &ticks) < 0 || argaddr(1, (uint64*)&handler) < 0)
    return -1;
  return sigalarm(ticks, handler);
}

uint64
sys_sigreturn(void)
{
  return sigreturn();
}

int
sigalarm(int ticks, void (*handler)())
{
  struct proc* p = myproc();
  p->round = ticks;
  p->sighandler = handler;
  p->ticks = 0;
  return 0;
}

int
sigreturn(void)
{
  return 0;
}
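  The registration in kernel/syscall.h and kernel/syscall.c follows the usual pattern; a sketch (the numbers 22 and 23 are my assumption and only need to avoid colliding with existing syscalls):

// kernel/syscall.h
#define SYS_sigalarm  22
#define SYS_sigreturn 23

// kernel/syscall.c: extend the dispatch table
extern uint64 sys_sigalarm(void);
extern uint64 sys_sigreturn(void);

static uint64 (*syscalls[])(void) = {
  ...
  [SYS_sigalarm]  sys_sigalarm,
  [SYS_sigreturn] sys_sigreturn,
};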

  Finally, adjust usertrap: when the condition for firing sighandler is met, fetch sighandler's physical address hpa from the process's page table and execute it:

  } else if((which_dev = devintr()) != 0){
    if(2 == which_dev) {
      if(p->round != 0) {
        if(++p->ticks % p->round == 0) {
          uint64 hpa = walkaddr(p->pagetable, (uint64)p->sighandler);
          if(0 == hpa) {
            panic("sighandler");
          }
          ((void(*)(void))hpa)();
        }
      }
    }
    // ok

  The moment I finished writing this, every alarm bell in my head went off..... First, kernel virtual and physical addresses were a muddle again: the kernel code runs at virtual addresses while hpa is a physical address, so this was bound to blow up. Second, the code is nested more than five levels deep, which makes my chest tighten.....

  One run and, sure enough, it panicked. The panic told me hpa was 0x87f7b00, and accessing that address raised an instruction page fault.

  But it was a good panic: it completely unjammed my thinking.

Analysis and Design

  First, understand the panic: sighandler is a user-mode function, and executing it in kernel mode is simply not going to work. The biggest difficulties of this lab are the following:

(1) sighandler must execute in user mode;

(2) after sighandler finishes, the process must continue from where it was interrupted;

(3) while the process is running sighandler, it can still be interrupted by timer traps. The test code requires that if sighandler has not finished and the tick count again satisfies the trigger condition, that invocation of sighandler is skipped.

  The first step is to design the complete flow of one sighandler invocation and return. Mine looks roughly like this:

Flow 1:

+-----------+        +--------------+        +--------------+
|  S MODE   |        |   S MODE     |        |   U MODE     |
| usertrap  | -----> | usertrapret  | -----> |  sighandler  |
+-----------+        +--------------+        +--------------+
usertrap checks       usertrapret sets        sighandler runs
myproc()->ticks,      the user pc (epc)       in user mode
sees sighandler       to sighandler and
must run, patches     returns to user
the trapframe,        mode
calls usertrapret

Flow 2:

+--------------+        +----------------+        +----------------------+
|   U MODE     |        |    S MODE      |        |       U MODE         |
|  sighandler  | -----> | sys_sigreturn  | -----> | interrupted function |
|  sigreturn   |        |  usertrapret   |        +----------------------+
+--------------+        +----------------+
sighandler calls         sigreturn clears sigpass,
sigreturn, a system      restores the context from when
call, which traps        sighandler was triggered, and
into sys_sigreturn       calls usertrapret to return
                         to user mode

  With this flow in hand, designing the code becomes considerably easier. First of all, the process must own two trapframes:

struct proc {
  struct spinlock lock;

  // p->lock must be held when using these:
  enum procstate state;        // Process state
  struct proc *parent;         // Parent process
  void *chan;                  // If non-zero, sleeping on chan
  int killed;                  // If non-zero, have been killed
  int xstate;                  // Exit status to be returned to parent's wait
  int pid;                     // Process ID

  // these are private to the process, so p->lock need not be held.
  uint64 kstack;               // Virtual address of kernel stack
  uint64 sz;                   // Size of process memory (bytes)
  pagetable_t pagetable;       // Page table
  struct trapframe *tf;        // data page for trampoline.S
  struct context context;      // swtch() here to run process
  struct file *ofile[NOFILE];  // Open files
  struct inode *cwd;           // Current directory
  char name[16];               // Process name (debugging)

  int sigpass;                 // if sigpass is 1, do not re-enter sighandler
  int round;                   // if round is 0, no sighandler is registered
  int ticks;                   // total ticks since sighandler was registered
  void (*sighandler);          // user-space address of the handler
  struct trapframe sigtrapframe;   // saved user context while sighandler runs
};

  Before patching p->tf in the first step, we copy p->tf into sigtrapframe; when sigreturn is called, we use sigtrapframe to restore the context from the moment sighandler interrupted the process;

  We patch the trapframe because we want usertrapret's return to set pc to the handler. But the final step of usertrapret's return to user space is an sret instruction, not ret; sret copies sepc into pc, and usertrapret loads sepc from p->tf->epc, so in usertrap we must modify p->tf->epc, not p->tf->ra:

void
usertrap(void)
{
  int which_dev = 0;

  if((r_sstatus() & SSTATUS_SPP) != 0)
    panic("usertrap: not from user mode");

  // send interrupts and exceptions to kerneltrap(),
  // since we're now in the kernel.
  w_stvec((uint64)kernelvec);

  struct proc *p = myproc();
  
  // save user program counter.
  p->tf->epc = r_sepc();
  
  if(r_scause() == 8){
    // system call

    if(p->killed)
      exit(-1);

    // sepc points to the ecall instruction,
    // but we want to return to the next instruction.
    p->tf->epc += 4;

    // an interrupt will change sstatus &c registers,
    // so don't enable until done with those registers.
    intr_on();

    syscall();
  } else if((which_dev = devintr()) != 0){
    if(2 == which_dev) {
      if(p->round != 0 && p->sigpass == 0) {
        p->sigpass = 1;                      // block reentrant invocations while the handler runs
        p->sigtrapframe = *p->tf;            // save the current trapframe
        p->tf->epc = (uint64)p->sighandler;  // sret uses epc; setting p->tf->ra would do nothing
      }
    }
    // ok
  } else {
    printf("usertrap(): unexpected scause %p (%s) pid=%d\n", r_scause(), scause_desc(r_scause()), p->pid);
    printf("            sepc=%p stval=%p\n", r_sepc(), r_stval());
    p->killed = 1;
  }

  if(p->killed)
    exit(-1);

  // give up the CPU if this is a timer interrupt.
  // if p->sigpass is 1, then don't yield
  if(which_dev == 2 && p->sigpass != 1)
    yield();

  usertrapret();
}

  Now when usertrapret finishes and returns to U Mode, the process runs sighandler, so flow 1 is implemented. Next, flow 2: modify sigreturn.

  sigreturn needs to clear sigpass and restore the context from when the process was interrupted by sighandler; that is just assigning sigtrapframe back into the trapframe and then calling usertrapret:

  After that usertrapret, the process resumes from exactly the spot where sighandler interrupted it;

int
sigreturn(void)
{
  struct proc* p = myproc();
  p->sigpass = 0;
  *(p->tf) = p->sigtrapframe;
  memset(&p->sigtrapframe, 0, sizeof(struct trapframe));
  usertrapret();
  return 0;
}

  As you can see, Part 2's code is small, but the amount of thinking behind it is considerable. Still, once you realize sighandler must run in user mode, coming up with this flow is not hard.

  OK, recompile; all tests pass:

ms@ubuntu:~/public/MIT 6.S081/Lab6 user level thread/xv6-riscv-fall19$ make qemu
qemu-system-riscv64 -machine virt -bios none -kernel kernel/kernel -m 128M -smp 1 -nographic -drive file=fs.img,if=none,format=raw,id=x0 -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0

xv6 kernel is booting

virtio disk init 0
init: starting sh
$ alarmtest
test0 start
...alarm!
test0 passed
test1 start
..alarm!
.alarm!
.alarm!
.alarm!
.alarm!
..alarm!
.alarm!
.alarm!
.alarm!
.alarm!
test1 passed
test2 start
.......alarm!
test2 passed
$


$ usertests
usertests starting
test reparent2: OK
test pgbug: OK
test sbrkbugs: usertrap(): unexpected scause 0x000000000000000c (instruction page fault) pid=3209
            sepc=0x00000000000044b0 stval=0x00000000000044b0
usertrap(): unexpected scause 0x000000000000000c (instruction page fault) pid=3210
            sepc=0x00000000000044b0 stval=0x00000000000044b0
OK
test badarg: OK
test reparent: OK
test twochildren: OK
test forkfork: OK
test forkforkfork: OK
test argptest: OK
test createdelete: OK
......
test sbrkfail: usertrap(): unexpected scause 0x000000000000000d (load page fault) pid=6181
            sepc=0x0000000000003982 stval=0x0000000000010000
OK
test sbrkarg: OK
test validatetest: OK
test stacktest: usertrap(): unexpected scause 0x000000000000000d (load page fault) pid=6185
            sepc=0x00000000000007ce stval=0x000000000000dc60
OK
test opentest: OK
test writetest: OK
test writebig: OK
test createtest: OK
test openiput: OK
test exitiput: OK
test iput: OK
test mem: OK
test pipe1: OK
test preempt: kill... wait... OK
test exitwait: OK
test rmdot: OK
test fourteen: OK
test bigfile: OK
test dirfile: OK
test iref: OK
test forktest: OK
test bigdir: OK
ALL TESTS PASSED

If you like this or found it helpful, please leave a like; writing these posts really does burn brain cells and time....

posted @ 2020-11-10 23:46  KatyuMarisa