Reading the GPU Temperature of a GeForce 8400


1 After installing lm_sensors, the sensors command shows the following:

nouveau-pci-0100
Adapter: PCI adapter
temp1:        +80.0°C  (high = +100.0°C, crit = +110.0°C)

This shows that the kernel can read the GPU temperature and exposes it through a user-space interface file (/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/temp1_input), and that PCI device 01:00.0 is the graphics card.
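As a quick check, this interface can be read directly: hwmon temp*_input files report millidegrees Celsius, so +80.0°C appears as 80000. A minimal user-space sketch (the sysfs path below is the one found on this machine and will differ elsewhere):

/* read_gpu_temp.c - read the hwmon temperature file and print degrees C */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/temp1_input";
    FILE *f = fopen(path, "r");
    long mdeg;

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%ld", &mdeg) != 1) {   /* value is in millidegrees C */
        fclose(f);
        fprintf(stderr, "unexpected contents\n");
        return 1;
    }
    fclose(f);
    printf("GPU temperature: %.1f C\n", mdeg / 1000.0);
    return 0;
}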

2  Locate the GPU temperature file in the filesystem
   Hardware monitoring information lives under /sys/class/hwmon; ls -l /sys/class/hwmon shows its contents:
lrwxrwxrwx. 1 root root 0 Mar 23 13:24 /sys/class/hwmon/hwmon0 -> sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/hwmon/hwmon0
lrwxrwxrwx. 1 root root 0 Mar 23 13:25 /sys/class/hwmon/hwmon1 -> sys/devices/pci0000:00/0000:00:18.3/hwmon/hwmon1
From the symlink target of hwmon0 (the 0000:01:00.0 part), we can conclude that the GPU temperature files live under /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0.
3  Check the card's PCI information, confirming that PCI device 01:00.0 is indeed an NVIDIA device
$ lspci -s 01:00.0
01:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
4 Identify the kernel function that reads the GPU temperature
Since we can locate the file that exposes the GPU temperature, the read handler of that file is the kernel function that actually reads the temperature. But how do we find that function?
4.1 Identify the driver for the NVIDIA graphics device
Method 1: step 2 above gave the sysfs directory of the temperature file, /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0; the name file in that directory holds the driver name, nouveau. Method 2: the first line of the sensors output, nouveau-pci-0100, has the form
driver-module-name - bus-type - bus-address, so the driver module is nouveau.ko.
Running find /lib/modules/`uname -r` -name nouveau.ko shows where that module lives in the installed module tree:

/lib/modules/`uname -r`/kernel/drivers/gpu/drm/nouveau/nouveau.ko

4.2 Find the corresponding files in a recent kernel source tree (linux-3.3.2) and locate the temperature read function
The nouveau initialization function, in nouveau_drv.c:

static int __init nouveau_init(void)
{
    ...
    nouveau_register_dsm_handler();
    return drm_pci_init(&driver, &nouveau_pci_driver);
}
/**
 * PCI device initialization. Called direct from modules at load time.
 *
 * \return zero on success or a negative number on failure.
 */
int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
{
    struct pci_dev *pdev = NULL;
    const struct pci_device_id *pid;
    int i;

    for (i = 0; pdriver->id_table[i].vendor != 0; i++) {
        /* pci_driver->id_table holds the keys that identify NVIDIA cards:
         * vendor, device, subvendor, and so on */
        pid = &pdriver->id_table[i];

        /* Loop around setting up a DRM device for each PCI device
         * matching our ID and device class. If we had the internal
         * function that pci_get_subsys and pci_get_class used, we'd
         * be able to just pass pid in instead of doing a two-stage
         * thing.
         */
        pdev = NULL;
        /* walk the devices currently on the PCI bus; if an NVIDIA card is
         * present, this returns its struct pci_dev */
        while ((pdev = pci_get_subsys(pid->vendor, pid->device, pid->subvendor,
                                      pid->subdevice, pdev)) != NULL) {
            if ((pdev->class & pid->class_mask) != pid->class)
                continue;

            /* stealth mode requires a manual probe */
            pci_dev_get(pdev);
            drm_get_pci_dev(pdev, pid, driver);  /* bind the drm_driver to this PCI device */
        }
    }
    return 0;
}

The matching entries for NVIDIA cards are:

static struct pci_device_id pciidlist[] = {
    {
        PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID),
        .class = PCI_BASE_CLASS_DISPLAY << 16,
        .class_mask = 0xff << 16,
    },
    {
        PCI_DEVICE(PCI_VENDOR_ID_NVIDIA_SGS, PCI_ANY_ID),
        .class = PCI_BASE_CLASS_DISPLAY << 16,
        .class_mask = 0xff << 16,
    },
    {}
};

/*
 * PCI_DEVICE - macro used to describe a specific pci device
 * @vend: the 16 bit PCI Vendor ID
 * @dev: the 16 bit PCI Device ID
 *
 * This macro is used to create a struct pci_device_id that matches a
 * specific device. The subvendor and subdevice fields will be set to
 * PCI_ANY_ID.
 *
 * #define PCI_DEVICE(vend,dev) \
 *     .vendor = (vend), .device = (dev), \
 *     .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID
 */
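To make the class matching concrete: a VGA-compatible controller reports class 0x030000 (base class 0x03, display). With .class = PCI_BASE_CLASS_DISPLAY << 16 and .class_mask = 0xff << 16, the test in drm_pci_init compares only the base-class byte. A tiny stand-alone sketch of the same check (values hard-coded for illustration):

#include <stdio.h>

int main(void)
{
    unsigned int dev_class  = 0x030000;   /* VGA compatible controller          */
    unsigned int id_class   = 0x03 << 16; /* PCI_BASE_CLASS_DISPLAY << 16       */
    unsigned int class_mask = 0xff << 16; /* keep only the base-class byte      */

    /* same condition drm_pci_init uses (it skips the device on a mismatch) */
    if ((dev_class & class_mask) == id_class)
        printf("display device: match\n");
    else
        printf("not a display device\n");
    return 0;
}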

Once a match is found, the driver is loaded as follows:


1 Initialize the card's struct pci_dev, which represents the device on the PCI bus (one per PCI device).
2 Allocate and initialize a struct drm_device for the card; this structure belongs to the specific graphics driver.
3 Associate the struct pci_dev with the struct drm_device: pci_dev -> struct device -> device_private's driver_data ends up pointing at the
struct drm_device (a minimal sketch of this driver-data pattern follows the listing below).
4 With device and driver linked, run the driver's load hook, i.e. nouveau_load.
int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
                    struct drm_driver *driver)
{
    ....
    mutex_lock(&drm_global_mutex);
    if ((ret = drm_fill_in_dev(dev, ent, driver))) {  /* tie the drm_device to the driver */
        printk(KERN_ERR "DRM: Fill_in_dev failed.\n");
        goto err_g2;
    }

    if (dev->driver->load) {
        ret = dev->driver->load(dev, ent->driver_data);
        if (ret)
            goto err_g4;
    }
    ...
}
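The association described in step 3 is, at its core, the standard PCI driver-data pattern: the drm_device pointer is stored in the pci_dev (essentially via pci_set_drvdata) so later code can get it back from the device alone. A minimal sketch of the pattern itself, not the DRM code; the structure and function names here are illustrative only:

#include <linux/pci.h>

struct my_card_state {          /* stand-in for drm_device / private data */
    void __iomem *mmio;
    int chipset;
};

/* at load time: remember our private state in the struct pci_dev */
static inline void remember_state(struct pci_dev *pdev, struct my_card_state *st)
{
    pci_set_drvdata(pdev, st);  /* lands in pdev->dev's driver_data */
}

/* later (ioctl, sysfs show, interrupt, ...): recover it from the device */
static inline struct my_card_state *recall_state(struct pci_dev *pdev)
{
    return pci_get_drvdata(pdev);
}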

The driver load path:

int nouveau_load(struct drm_device *dev, unsigned long flags)
{
    struct drm_nouveau_private *dev_priv;
    uint32_t reg0, strap;
    resource_size_t mmio_start_offs;
    int ret;

    dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);

    ...

    dev->dev_private = dev_priv;
    dev_priv->dev = dev;

    dev_priv->flags = flags & NOUVEAU_FLAGS;

    NV_DEBUG(dev, "vendor: 0x%X device: 0x%X class: 0x%X\n",
             dev->pci_vendor, dev->pci_device, dev->pdev->class);

    /* resource 0 is mmio regs */
    /* resource 1 is linear FB */
    /* resource 2 is RAMIN (mmio regs + 0x1000000) */
    /* resource 6 is bios */

    /* map the mmio regs: the card's register space (BAR0) is mapped into
     * kernel virtual memory */
    mmio_start_offs = pci_resource_start(dev->pdev, 0);
    dev_priv->mmio = ioremap(mmio_start_offs, 0x00800000);
    ...

    /* Time to determine the card architecture */
    reg0 = nv_rd32(dev, NV03_PMC_BOOT_0);

    /* We're dealing with >=NV10 */
    if ((reg0 & 0x0f000000) > 0) {
        /* Bit 27-20 contain the architecture in hex */
        dev_priv->chipset = (reg0 & 0xff00000) >> 20;
    /* NV04 or NV05 */
    } else if ((reg0 & 0xff00fff0) == 0x20004000) {
        if (reg0 & 0x00f00000)
            dev_priv->chipset = 0x05;
        else
            dev_priv->chipset = 0x04;
    } else
        dev_priv->chipset = 0xff;

    switch (dev_priv->chipset & 0xf0) {
    case 0x00:
    case 0x10:
    case 0x20:
    case 0x30:
        dev_priv->card_type = dev_priv->chipset & 0xf0;
        break;
    case 0x40:
    case 0x60:
        dev_priv->card_type = NV_40;
        break;
    case 0x50:
    case 0x80:
    case 0x90:
    case 0xa0:
        dev_priv->card_type = NV_50;
        break;
    case 0xc0:
        dev_priv->card_type = NV_C0;
        break;
    case 0xd0:
        dev_priv->card_type = NV_D0;
        break;
    default:
        NV_INFO(dev, "Unsupported chipset 0x%08x\n", reg0);
        ret = -EINVAL;
        goto err_mmio;
    }

    NV_INFO(dev, "Detected an NV%2x generation card (0x%08x)\n",
            dev_priv->card_type, reg0);
    ...

    /* For kernel modesetting, init card now and bring up fbcon */
    ret = nouveau_card_init(dev);  /* per-generation card initialization */
    ...
}
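A worked example of the chipset decode above, using a hypothetical NV03_PMC_BOOT_0 value for an NV86-class chip such as the GeForce 8400M (the module written later in this post logs chipset:86 on this machine, which is consistent):

#include <stdio.h>

int main(void)
{
    unsigned int reg0 = 0x086000a2;  /* hypothetical boot-0 value for an NV86 */
    unsigned int chipset, card_type;

    if ((reg0 & 0x0f000000) > 0)
        chipset = (reg0 & 0x0ff00000) >> 20;  /* bits 27..20 -> 0x86 */
    else
        chipset = 0xff;                       /* NV04/NV05 cases omitted here */

    card_type = chipset & 0xf0;               /* 0x80 -> the NV_50 family */
    printf("chipset 0x%02x, card_type 0x%02x\n", chipset, card_type);
    return 0;
}

Since 0x86 >= 0x84, nouveau_init_engine_ptrs below picks nv84_temp_get as the temperature hook for this chip.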

Per-generation card initialization:

int nouveau_card_init(struct drm_device *dev)
{
    ...

    /* Initialise internal driver API hooks */
    ret = nouveau_init_engine_ptrs(dev);
    ...
}

static int nouveau_init_engine_ptrs(struct drm_device *dev)
{
    struct drm_nouveau_private *dev_priv = dev->dev_private;
    struct nouveau_engine *engine = &dev_priv->engine;

    switch (dev_priv->chipset & 0xf0) {
    ...

    case 0x50:
    case 0x80: /* gotta love NVIDIA's consistency.. */
    case 0x90:
    case 0xa0:
        engine->instmem.init         = nv50_instmem_init;
        engine->instmem.takedown     = nv50_instmem_takedown;
        engine->instmem.suspend      = nv50_instmem_suspend;
        engine->instmem.resume       = nv50_instmem_resume;
        engine->instmem.get          = nv50_instmem_get;
        engine->instmem.put          = nv50_instmem_put;
        engine->instmem.map          = nv50_instmem_map;
        engine->instmem.unmap        = nv50_instmem_unmap;
        if (dev_priv->chipset == 0x50)
            engine->instmem.flush    = nv50_instmem_flush;
        else
            engine->instmem.flush    = nv84_instmem_flush;
        engine->mc.init              = nv50_mc_init;
        engine->mc.takedown          = nv50_mc_takedown;
        engine->timer.init           = nv04_timer_init;
        engine->timer.read           = nv04_timer_read;
        engine->timer.takedown       = nv04_timer_takedown;
        engine->fb.init              = nv50_fb_init;
        engine->fb.takedown          = nv50_fb_takedown;
        engine->fifo.channels        = 128;
        engine->fifo.init            = nv50_fifo_init;
        engine->fifo.takedown        = nv50_fifo_takedown;
        engine->fifo.disable         = nv04_fifo_disable;
        engine->fifo.enable          = nv04_fifo_enable;
        engine->fifo.reassign        = nv04_fifo_reassign;
        engine->fifo.channel_id      = nv50_fifo_channel_id;
        engine->fifo.create_context  = nv50_fifo_create_context;
        engine->fifo.destroy_context = nv50_fifo_destroy_context;
        engine->fifo.load_context    = nv50_fifo_load_context;
        engine->fifo.unload_context  = nv50_fifo_unload_context;
        engine->fifo.tlb_flush       = nv50_fifo_tlb_flush;
        engine->display.early_init   = nv50_display_early_init;
        engine->display.late_takedown = nv50_display_late_takedown;
        engine->display.create       = nv50_display_create;
        engine->display.destroy      = nv50_display_destroy;
        engine->display.init         = nv50_display_init;
        engine->display.fini         = nv50_display_fini;
        engine->gpio.init            = nv50_gpio_init;
        engine->gpio.fini            = nv50_gpio_fini;
        engine->gpio.drive           = nv50_gpio_drive;
        engine->gpio.sense           = nv50_gpio_sense;
        engine->gpio.irq_enable      = nv50_gpio_irq_enable;
        switch (dev_priv->chipset) {
        case 0x84:
        case 0x86:
        case 0x92:
        case 0x94:
        case 0x96:
        case 0x98:
        case 0xa0:
        case 0xaa:
        case 0xac:
        case 0x50:
            engine->pm.clocks_get    = nv50_pm_clocks_get;
            engine->pm.clocks_pre    = nv50_pm_clocks_pre;
            engine->pm.clocks_set    = nv50_pm_clocks_set;
            break;
        default:
            engine->pm.clocks_get    = nva3_pm_clocks_get;
            engine->pm.clocks_pre    = nva3_pm_clocks_pre;
            engine->pm.clocks_set    = nva3_pm_clocks_set;
            break;
        }
        engine->pm.voltage_get       = nouveau_voltage_gpio_get;
        engine->pm.voltage_set       = nouveau_voltage_gpio_set;
        /* the temperature hook is chosen by chipset; its implementation
         * is shown below */
        if (dev_priv->chipset >= 0x84)
            engine->pm.temp_get      = nv84_temp_get;
        else
            engine->pm.temp_get      = nv40_temp_get;
        engine->pm.pwm_get           = nv50_pm_pwm_get;
        engine->pm.pwm_set           = nv50_pm_pwm_set;
        engine->vram.init            = nv50_vram_init;
        engine->vram.takedown        = nv50_vram_fini;
        engine->vram.get             = nv50_vram_new;
        engine->vram.put             = nv50_vram_del;
        engine->vram.flags_valid     = nv50_vram_flags_valid;
        break;
    ...
}

So the function that actually reads the temperature is nv84_temp_get, defined as follows:

int
nv84_temp_get(struct drm_device *dev)
{
    return nv_rd32(dev, 0x20400);
}

static inline u32 nv_rd32(struct drm_device *dev, unsigned reg)
{
    struct drm_nouveau_private *dev_priv = dev->dev_private;
    return ioread32_native(dev_priv->mmio + reg);
}

where ioread32_native is an endian-dependent alias (ioread32/ioread32be themselves are declared in the header <asm-generic/iomap.h>):

#ifdef __BIG_ENDIAN
#define ioread16_native ioread16be
#define iowrite16_native iowrite16be
#define ioread32_native ioread32be
#define iowrite32_native iowrite32be
#else /* def __BIG_ENDIAN */
#define ioread16_native ioread16
#define iowrite16_native iowrite16
#define ioread32_native ioread32
#define iowrite32_native iowrite32
#endif /* def __BIG_ENDIAN else */
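For completeness, this is also where the temp1_input value from step 1 comes from: nouveau's hwmon "show" callback calls the engine's temp_get hook and reports the result in millidegrees. A rough paraphrase of that callback from nouveau_pm.c of this kernel generation (not a verbatim copy; it lives inside the nouveau sources and relies on the driver's own headers):

/* paraphrased from nouveau_pm.c: how temp_get() reaches temp1_input */
static ssize_t
nouveau_hwmon_show_temp(struct device *d, struct device_attribute *a, char *buf)
{
    struct drm_device *dev = dev_get_drvdata(d);
    struct drm_nouveau_private *dev_priv = dev->dev_private;
    struct nouveau_pm_engine *pm = &dev_priv->engine.pm;

    /* temp_get() (nv84_temp_get here) returns degrees C; the hwmon sysfs
     * convention is millidegrees, hence the *1000 */
    return snprintf(buf, PAGE_SIZE, "%d\n", pm->temp_get(dev) * 1000);
}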

So the temperature can be obtained simply by calling this function. Alternatively, once the device registers have been mapped into kernel virtual memory, the generic MMIO accessors can be used:
readl(mmio + reg);   /* equivalent to ioread32_native(mmio + reg) on little-endian machines */
These accessors have the following form (just include the io.h header):
#define readb(addr) (*(volatile unsigned char *) __io_virt(addr))
#define readw(addr) (*(volatile unsigned short *) __io_virt(addr))
#define readl(addr) (*(volatile unsigned int *) __io_virt(addr))

#define writeb(b,addr) (*(volatile unsigned char *) __io_virt(addr) = (b))
#define writew(b,addr) (*(volatile unsigned short *) __io_virt(addr) = (b))
#define writel(b,addr) (*(volatile unsigned int *) __io_virt(addr) = (b))

#define memset_io(a,b,c) memset(__io_virt(a),(b),(c))
#define memcpy_fromio(a,b,c) memcpy((a),__io_virt(b),(c))
#define memcpy_toio(a,b,c) memcpy(__io_virt(a),(b),(c))
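As an aside, because BAR0 is an ordinary PCI memory resource, the same register can also be sampled from user space (as root) without any kernel module, by mmap()ing the device's resource0 file in sysfs. A hedged sketch, reusing the card's PCI address from steps 1-3 and the 0x20400 offset from nv84_temp_get; it only works if the kernel permits mmap of that resource file:

/* gpu_temp_mmap.c - user-space peek at the NV84+ temperature register */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *res = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    int fd = open(res, O_RDONLY);
    volatile uint32_t *regs;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* map the first 0x21000 bytes of BAR0 so offset 0x20400 is covered */
    regs = mmap(NULL, 0x21000, PROT_READ, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }
    printf("temperature register 0x20400: %u C\n",
           (unsigned)regs[0x20400 / 4]);
    munmap((void *)regs, 0x21000);
    close(fd);
    return 0;
}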

5 Write a kernel module that reads the GPU temperature


#include <linux/device.h>          // bus_find_device, struct device
#include <linux/fs.h>
#include <linux/compiler.h>        // __iomem
#include <asm-generic/iomap.h>     // ioread32, ioread32be
#include <linux/types.h>
#include <net/net_namespace.h>
#include <linux/module.h>
#include <linux/kernel.h>          /* We're doing kernel work */
#include <linux/pci.h>             // pci_resource_start; first we need the struct pci_dev; also no_pci_devices
#include <linux/proc_fs.h>         // create_proc_entry, remove_proc_entry (added: required for the proc interface)
#include <linux/uaccess.h>         // copy_to_user (added: required by the read handler)
#include <asm/io.h>                // ioremap()
#include <linux/mod_devicetable.h> // struct pci_device_id
/*
from nouveau_drv.c
struct pci_device_id {
    __u32 vendor, device;       // Vendor and device ID or PCI_ANY_ID
    __u32 subvendor, subdevice; // Subsystem ID's or PCI_ANY_ID
    __u32 class, class_mask;    // (class,subclass,prog-if) triplet
    kernel_ulong_t driver_data; // Data private to the driver
};
*/
#include <linux/pci.h>             // PCI_DEVICE(vend,dev), to_pci_dev(n)
/*
 * PCI_DEVICE - macro used to describe a specific pci device
 * @vend: the 16 bit PCI Vendor ID
 * @dev: the 16 bit PCI Device ID
 *
 * This macro is used to create a struct pci_device_id that matches a
 * specific device. The subvendor and subdevice fields will be set to
 * PCI_ANY_ID.
 *
 * #define PCI_DEVICE(vend,dev) \
 *     .vendor = (vend), .device = (dev), \
 *     .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID
 */
#include <linux/pci_ids.h>         // PCI_VENDOR_ID_NVIDIA
MODULE_LICENSE("GPL");
MODULE_AUTHOR("JIAN ZHOU");
MODULE_DESCRIPTION("NVIDIA THERMAL");
static struct pci_device_id pciidlist[] = {
    {
        PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID),
        .class = PCI_BASE_CLASS_DISPLAY << 16,
        .class_mask = 0xff << 16,
    },
    {
        PCI_DEVICE(PCI_VENDOR_ID_NVIDIA_SGS, PCI_ANY_ID),
        .class = PCI_BASE_CLASS_DISPLAY << 16,
        .class_mask = 0xff << 16,
    },
    {}
};

const static struct pci_dev* nvidiaDev;
void __iomem* mmio; /* the CPU virtual address that the card's PCI register space is mapped to */
#ifdef __BIG_ENDIAN
#define ioread32_native ioread32be
#else
#define ioread32_native ioread32
#endif
/* the core register-read helper */
static inline u32 nv_rd32(void __iomem* mmio, unsigned reg)
{
    return readl(mmio + reg); /* ioread32_native(mmio + reg); */
}

static ssize_t getTemp(struct file* file, char __user* userbuf, size_t bytes, loff_t* off)
{
    char tmp[64];
    int len;
    u32 temp = -2;

    if (*off)
        return 0;
    if (mmio)
        temp = nv_rd32(mmio, 0x20400);
    /* format into a kernel buffer and copy it out; calling sprintf() directly
     * on the __user pointer is not safe */
    len = scnprintf(tmp, sizeof(tmp), "%s,temp:%d\n", nvidiaDev->driver->name, temp);
    if (len > bytes)
        len = bytes;
    if (copy_to_user(userbuf, tmp, len))
        return -EFAULT;
    *off = len;
    return len;
}

static struct file_operations my_file_ops =
{
    .owner = THIS_MODULE,
    .read = getTemp,
};
/* hard-coded address of the kernel's pci_bus_type symbol (looked up for this
 * specific kernel image, e.g. from System.map or /proc/kallsyms); it is only
 * valid on the exact kernel it was taken from */
static struct bus_type* pci_bus_types = (struct bus_type*)0xc17f4640;
static int match_pci_dev_by_id(struct device* pdev, void* data)
{
    struct pci_dev* dev = to_pci_dev(pdev);
    struct pci_device_id* id = data;
    if ((id->vendor == PCI_ANY_ID || id->vendor == dev->vendor) &&
        (id->device == PCI_ANY_ID || id->device == dev->device) &&
        (id->subvendor == PCI_ANY_ID || id->subvendor == dev->subsystem_vendor) &&
        (id->subdevice == PCI_ANY_ID || id->subdevice == dev->subsystem_device) &&
        !((id->class ^ dev->class) & id->class_mask))
        return 1;
    else
        return 0;
}
struct pci_dev *pci_get_subsyss(const struct pci_device_id* id, struct pci_dev *from)
{
    struct device* dev_start = NULL, *dev = NULL;
    struct pci_dev* pdev = NULL;
    /* pci_find_subsys() can be called on the ide_setup() path,
     * super-early in boot. But the down_read() will enable local
     * interrupts, which can cause some machines to crash. So here we
     * detect and flag that situation and bail out early.
     */
    if (unlikely(no_pci_devices()))
        return NULL;

    printk("in pci_get_subsyss\n");

    if (from)
        dev_start = &from->dev;

    dev = bus_find_device(pci_bus_types, dev_start, (void*)id, match_pci_dev_by_id);
    if (dev)
        pdev = to_pci_dev(dev);
    if (from)
        pci_dev_put(from);
    printk("leaving pci_get_subsyss\n");
    return pdev;
}
static void getNvidiaDevice(void)
{
    struct pci_dev* pdev = NULL;
    const struct pci_device_id* pid;
    int i;
    printk("in getNvidiaDevice\n");
    for (i = 0; i < 2; i++)
    {
        pid = &pciidlist[i];

        /* Loop around setting up a DRM device for each PCI device
         * matching our ID and device class. If we had the internal
         * function that pci_get_subsys and pci_get_class used, we'd
         * be able to just pass pid in instead of doing a two-stage
         * thing.
         */
        pdev = NULL;
        while ((pdev = pci_get_subsyss(pid, pdev)) != NULL) {
            if ((pdev->class & pid->class_mask) != pid->class)
                continue;
            /* stealth mode requires a manual probe */
            else
                break;
            //drm_get_pci_dev(pdev, pid, driver);
        }
        /* look for the card bound to the nouveau driver; the driver is already
         * loaded at this point, so its name can be matched */
        if (pdev && pdev->driver && strstr(pdev->driver->name, "nouveau"))
        {
            pci_dev_get(pdev); /* get the pci_dev for nvidia */
            nvidiaDev = pdev;
            return;
        }
    }
    printk("leaving getNvidiaDevice\n");
}
/* local re-declaration of the kernel-private struct device_private layout
 * (not exported in public headers; not actually used below) */
struct device_private {
    struct klist klist_children;
    struct klist_node knode_parent;
    struct klist_node knode_driver;
    struct klist_node knode_bus;
    void *driver_data;
    struct device *device;
};
#ifndef NV03_PMC_BOOT_0
#define NV03_PMC_BOOT_0 0x00000000
#endif
static __init int in(void)
{
    struct proc_dir_entry *entry;
    resource_size_t mmio_start_offs;
    uint32_t reg0 = 0;
    int chipset = 0;

    /* expose the reader as /proc/net/nvidiaTemp */
    entry = create_proc_entry("nvidiaTemp", 0, init_net.proc_net);
    if (entry) {
        entry->proc_fops = &my_file_ops;
    }
    getNvidiaDevice();
    //fill nvidiaDev
    if (nvidiaDev)
    {
        /* map BAR0 (the register aperture), exactly as nouveau_load does */
        mmio_start_offs = pci_resource_start(nvidiaDev, 0);
        mmio = ioremap(mmio_start_offs, 0x00800000); /* 8 MB */

        reg0 = nv_rd32(mmio, NV03_PMC_BOOT_0);
        /* We're dealing with >=NV10 */
        if ((reg0 & 0x0f000000) > 0)
        {
            /* Bit 27-20 contain the architecture in hex */
            chipset = (reg0 & 0xff00000) >> 20;
            /* NV04 or NV05 */
        }
        else if ((reg0 & 0xff00fff0) == 0x20004000)
        {
            if (reg0 & 0x00f00000)
                chipset = 0x05;
            else
                chipset = 0x04;
        }
        else
            chipset = 0xff;

        printk(KERN_INFO "nvidia local address:%lx,mapped address:%lx!,chipset:%x\n",
               (unsigned long)mmio_start_offs, (unsigned long)mmio, chipset);
    }
    printk(KERN_INFO "Initialze TEMP success!\n");
    return 0;
}
/**
 * This function is called when the module is unloaded.
 */
static __exit void out(void)
{
    remove_proc_entry("nvidiaTemp", init_net.proc_net);
    if (nvidiaDev)
        pci_dev_put((struct pci_dev*)nvidiaDev);
    if (mmio)
        iounmap(mmio);
    printk(KERN_INFO "Remove my_seq_proc success!\n");
}
module_init(in);
module_exit(out);
/* Output:
[65517.412294] in getNvidiaDevice
[65517.412305] in pci_get_subsyss
[65517.412320] leaving pci_get_subsyss
[65517.414604] nvidia local address:fd000000,mapped address:fb380000!,chipset:86   <- the chipset on this machine is 0x86
[65517.414610] Initialze TEMP success!
root@ubuntu:/media/Kingston/nouveauTempature# modinfo b.ko
filename: b.ko
description: NVIDIA THERMAL
author: JIAN ZHOU
license: GPL
srcversion: 493E700B5A3FD79435A5989
depends:
vermagic: 3.2.0-6-generic SMP mod_unload modversions 686
*/

On my laptop this reads the GPU temperature correctly; after loading the module, the value can be read from /proc/net/nvidiaTemp, the proc entry created above.

Appendix: dead ends along the way
1 Version mismatches
At first, the test laptop ran Linux kernel 3.2.0-6, while the code I was browsing in Source Insight was version 2.6.28. The nouveau driver only entered the kernel tree in 2.6.33, and as late as 2.6.35 it still did not provide a temperature-read function [easy to verify: go into linux-2.6.35/drivers/gpu/drm and run grep -R "temp.*input" . ; version 3.3.2, by contrast, does provide the corresponding interface]. So I assumed the temperature-read function lived under drivers/hwmon, since that directory holds the hardware-sensor monitoring code. Running grep -R temp_input in the hwmon directory turned up every file related to temp1_input, and grep -R "_driver)" . | grep pci picked out the PCI driver files among them.

Analyzing those files led nowhere. Thinking it over, the temperature must ultimately be read by the graphics driver itself, and in the 2.6.35 tree I was studying I had finally found the driver directory, but there was simply nothing temperature-related in it. Searching in English (which matters for this kind of material; Baidu is poor at it) turned up http://lists.freedesktop.org/archives/nouveau/2009-November/004087.html, which made it clear that the sensor-read function must exist in a newer tree [after all, my own 3.2.0 kernel does read the GPU sensor]. Downloading the then-latest 3.3.2 and analyzing it finally turned it up, after a lot of wasted effort.
2 Why fan speed, voltage, and similar readings may be unavailable
As the analysis above shows, reading fan-speed or voltage sensors from the kernel requires matching kernel driver support. In general, successfully reading, say, the CPU fan speed needs three things. First, the hardware must provide a sensor that measures the fan speed in real time and a register that holds the reading; most hardware vendors do provide this. Second, the kernel must have a driver that reads that register, either with I/O port instructions [when the device uses port-mapped I/O] or with ordinary reads of a mapped virtual address [when it uses memory-mapped I/O]; this varies a great deal between kernel versions. lm_sensors' sensors-detect probes the various hardware interfaces and checks whether a kernel module can service each sensor; if a value cannot be read, the running kernel has no function for it, and the only options are to study the hardware documentation and implement the read yourself, or to find an existing implementation. Finally, user-space tools such as sensors, xsensors, or nvclock_gtk read the kernel-processed register values through the sysfs interface and convert them into a human-readable display of GPU temperature, CPU temperature, and so on.

  GPU
GPU stands for Graphics Processing Unit. The term is defined in contrast to the CPU: as graphics processing has become increasingly important in modern computers (especially home systems and for gaming enthusiasts), a dedicated graphics processor is needed.
 DRM
Direct Rendering Manager: the kernel modules required for direct rendering, which give the graphics card 3D acceleration. A 2D display chip relies mainly on the CPU when processing 3D images and effects ("software acceleration"), whereas a 3D display chip implements 3D image and effects processing inside the chip itself, so-called "hardware acceleration".
