Theano, the ancestor and grandfather-level framework of deep learning: example code study (5)
Code 1: (computing the Jacobian matrix)
import theano
from theano import tensor
# Creating a vector
x = tensor.dvector('x')
# Creating 'y' expression
y = (2 * x ** 3)
# Computing the Jacobian: theano.grad only differentiates a scalar,
# so scan differentiates each entry y[i] with respect to x and stacks
# the resulting gradient rows
output, updates = theano.scan(lambda i, y, x: tensor.grad(y[i], x),
                              sequences=tensor.arange(y.shape[0]),
                              non_sequences=[y, x])
# Creating function (this scan produces no updates, so they may be omitted)
fun = theano.function([x], output)
# Calling function
print(fun([3, 3]))
Output: the 2x2 Jacobian [[54. 0.], [0. 54.]], since dy_i/dx_i = 6x_i^2 = 54 at x = 3 and the cross terms are zero.
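Theano also ships a convenience wrapper for exactly this pattern, theano.gradient.jacobian, which builds the same scan internally; a minimal sketch reusing x and y from above:
# Same Jacobian via the built-in helper
J = theano.gradient.jacobian(y, x)
jac_fun = theano.function([x], J)
print(jac_fun([3, 3]))  # expected: [[54. 0.], [0. 54.]]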
Code 2: (computing the Hessian matrix)
import theano
from theano import tensor
# Creating a vector
x = tensor.dvector('x')
# Creating 'y' expression
y = (2 * x ** 3)
# Scalar cost, since tensor.grad requires a scalar to differentiate
cost = y.sum()
# First derivative (gradient) of the cost
derivative = tensor.grad(cost, x)
# Second derivatives: scan differentiates each gradient entry again,
# producing the rows of the Hessian
output, updates = theano.scan(lambda i, derivative, x: tensor.grad(derivative[i], x),
                              sequences=tensor.arange(derivative.shape[0]),
                              non_sequences=[derivative, x])
# Creating function (this scan produces no updates, so they may be omitted)
fun = theano.function([x], output)
# Calling function
print(fun([3, 3]))
Output: the Hessian [[36. 0.], [0. 36.]], since the second derivative of 2x^3 is 12x = 36 at x = 3.
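Likewise there is a built-in helper for second derivatives, theano.gradient.hessian, which takes a scalar cost; a minimal sketch reusing cost and x from above:
# Same Hessian via the built-in helper
H = theano.gradient.hessian(cost, x)
hess_fun = theano.function([x], H)
print(hess_fun([3, 3]))  # expected: [[36. 0.], [0. 36.]]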
Code 3: (Theano's TypedList type)
import theano
import theano.typed_list
# Creating a typed list whose elements are float32 vectors
f1 = theano.typed_list.TypedListType(theano.tensor.fvector)()
# Creating a vector
f2 = theano.tensor.fvector()
# Appending vector 'f2' to the typed list 'f1'
f3 = theano.typed_list.append(f1, f2)
# Creating a function that takes the list 'f1' and vector 'f2' and returns 'f3'
fun = theano.function([f1, f2], f3)
# Calling function
print(fun([[1, 2]], [2]))
print(type(fun([[1, 2]], [2])))
Output: a Python list holding both vectors, [array([1., 2.], dtype=float32), array([2.], dtype=float32)], and its type <class 'list'>.
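Besides append, the theano.typed_list module offers further list operations; a small sketch, assuming theano.typed_list.length is available as documented:
# Query the length of a typed list (hypothetical usage of typed_list.length)
l1 = theano.typed_list.TypedListType(theano.tensor.fvector)()
n = theano.typed_list.length(l1)
len_fun = theano.function([l1], n)
print(len_fun([[1, 2], [3]]))  # expected: 2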
Code 4: (Theano's switch selection function)
import theano
from theano import tensor
# Creating two scalars
x, y = tensor.scalars('x', 'y')
xx, yy = tensor.vectors('xx', 'yy')
# switch expressions: choose elementwise between two branches
switch_expression = tensor.switch(tensor.gt(x, y), x, y)
switch_expression2 = tensor.switch(tensor.gt(xx, yy), xx, yy)
# Creating functions (the 'vm' linker enables lazy evaluation)
fun = theano.function([x, y], switch_expression, mode=theano.compile.mode.Mode(linker='vm'))
fun2 = theano.function([xx, yy], switch_expression2, mode=theano.compile.mode.Mode(linker='vm'))
# Calling functions
print(fun(12, 11))
print(fun2([2, 2, 2], [1, 1, 1]))
print(fun2([2, 0, 2], [1, 3, 1]))
Output: 12.0, then the elementwise maxima [2. 2. 2.] and [2. 3. 2.].
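Note that switch evaluates both branches elementwise; for a scalar condition that lazily evaluates only the selected branch, Theano provides theano.ifelse.ifelse (it needs the 'vm' linker used above to actually be lazy). A minimal sketch reusing the scalars x and y:
from theano.ifelse import ifelse
# Lazy scalar selection: only the chosen branch is computed
lazy_expression = ifelse(tensor.gt(x, y), x, y)
lazy_fun = theano.function([x, y], lazy_expression, mode=theano.compile.mode.Mode(linker='vm'))
print(lazy_fun(12, 11))  # expected: 12.0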
Code 5: (shared variables)
import theano
# Creating a shared variable named 'xxx' with initial value 10
x = theano.shared(10, 'xxx')
# Printing the variable shows its name, not its value
print(x)
# eval() returns the value currently stored in the variable
print(x.eval())
Output: the variable's name xxx, then its stored value 10.
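Shared variables are normally read and written through get_value()/set_value(), and they are what the updates argument of theano.function mutates; a minimal sketch continuing with x from above:
x.set_value(42)
print(x.get_value())  # expected: 42
# A function whose updates increment the shared variable on each call
step = theano.function([], x, updates=[(x, x + 1)])
step()
print(x.get_value())  # expected: 43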
Code 6: (joint use with the probabilistic inference framework PyMC3)
import theano
from theano import tensor
import pymc3 as pm
# Creating a pymc3 model (not actually used below)
model = pm.Model()
# Creating tensor variable
mu = tensor.scalar('mu')
# Log-probability of a standard Normal distribution, evaluated at 'mu'
distribution = pm.Normal.dist(0, 1).logp(mu)
# Creating function
fun = theano.function([mu], distribution)
# Calling function
print( fun(4) )
Output: about -8.9189, the standard normal log-density at 4, i.e. -4^2/2 - log(2*pi)/2.
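As a sanity check (assuming SciPy is installed), the same value can be computed with scipy.stats:
from scipy.stats import norm
print(norm.logpdf(4, loc=0, scale=1))  # expected: about -8.9189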
Code 7: (a single-neuron perceptron built with ifelse)
import theano
from theano import tensor
from theano.ifelse import ifelse
# Creating variables
# Input neuron
x = tensor.vector('x')
# Weight
w = tensor.vector('w')
# Bias
b = tensor.scalar('b')
# Creating expression: weighted sum of the inputs plus bias
z = tensor.dot(x, w) + b
# Output neuron: fires 1 when z >= 0, otherwise 0
o = ifelse(tensor.lt(z, 0), 0, 1)
fun_neural_network = theano.function([x, w, b], o)
# Defining Inputs, Weights and bias
inputs = [ [0, 0], [0, 1], [1, 0], [1, 1] ]
weights = [ 1, 1]
bias = 0
# Iterate through all inputs and find outputs:
for m in inputs:
    out = fun_neural_network(m, weights, bias)
    print('The output for x1 = {} & x2 = {} is {}'.format(m[0], m[1], out))
Output: with weights [1, 1] and bias 0, z is never negative for these four inputs, so all four lines report 1.
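The decision boundary is controlled entirely by the bias here; a hedged variation (the bias values are my own choice, not from the original post) that turns the same unit into AND and OR gates:
# bias = -1.5 yields AND, bias = -0.5 yields OR for these binary inputs
for b_val, gate in [(-1.5, 'AND'), (-0.5, 'OR')]:
    outs = [int(fun_neural_network(m, weights, b_val)) for m in inputs]
    print('{} gate outputs: {}'.format(gate, outs))
# expected: AND gate outputs: [0, 0, 0, 1] and OR gate outputs: [0, 1, 1, 1]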
Code 8: (training a small feed-forward network with gradient descent)
import theano
from theano import tensor
from theano.ifelse import ifelse
import numpy as np
from random import random
# Creating variables:
x = tensor.matrix('x')  # Input neurons
w1 = theano.shared(np.array([random(), random()]))  # Randomly initialized weights
w2 = theano.shared(np.array([random(), random()]))
w3 = theano.shared(np.array([random(), random()]))
b1 = theano.shared(1.)  # Bias of the hidden layer (shared by both hidden units)
b2 = theano.shared(1.)  # Bias of the output unit
rate_of_learning = 0.01  # Learning rate
# Hidden layer: two sigmoid units
y1 = 1 / (1 + tensor.exp(-tensor.dot(x, w1) - b1))
y2 = 1 / (1 + tensor.exp(-tensor.dot(x, w2) - b1))
# Stack the hidden activations as the input to the output unit
x2 = tensor.stack([y1, y2], axis=1)
y3 = 1 / (1 + tensor.exp(-tensor.dot(x2, w3) - b2))
actual = tensor.vector('actual')  # Actual output
# Binary cross-entropy cost
cost = -(actual * tensor.log(y3) + (1 - actual) * tensor.log(1 - y3)).sum()
dervw1, dervw2, dervw3, dervb1, dervb2 = tensor.grad(cost, [w1, w2, w3, b1, b2])
# Model training: one gradient-descent update per call
model_train = theano.function(inputs=[x, actual],
                              outputs=[y3, cost],
                              updates=[(w1, w1 - rate_of_learning * dervw1),
                                       (w2, w2 - rate_of_learning * dervw2),
                                       (w3, w3 - rate_of_learning * dervw3),
                                       (b1, b1 - rate_of_learning * dervb1),
                                       (b2, b2 - rate_of_learning * dervb2)])
inputs = [ [0, 0], [0, 1], [1, 0], [1, 1] ]
outputs = [0,1,0,1]
# Train for 100000 gradient steps, recording the cost of each iteration
costs = []  # renamed from 'cost' so the symbolic cost above is not shadowed
for i in range(100000):
    pred, cost_iteration = model_train(inputs, outputs)
    costs.append(cost_iteration)
# Output
print('The outputs of the Neural network are => ')
for i in range(len(inputs)):
    print('The output for x1 = {} | x2 = {} => {}'.format(inputs[i][0], inputs[i][1], pred[i]))
Output: after 100000 gradient steps the four predictions should lie close to the targets [0, 1, 0, 1].
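To see the optimization actually converging, the recorded costs can be plotted (assuming matplotlib is installed):
import matplotlib.pyplot as plt
# Per-iteration cross-entropy recorded in 'costs' during training
plt.plot(costs)
plt.xlabel('iteration')
plt.ylabel('cost')
plt.show()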
Code 9: (matrix concatenation)
import theano
from theano import tensor
# Creating two matrices
a, b = tensor.matrices('a', 'b')
# concatenate joins along axis 0 (rows) by default
merge_c = tensor.concatenate([a, b])
# Creating function
concat_function = theano.function([a, b], merge_c)
# Calling function
print(concat_function([[1, 2]], [[1, 2], [3, 4]]))
Output: the 3x2 matrix [[1. 2.], [1. 2.], [3. 4.]], i.e. the 1x2 matrix stacked on top of the 2x2 matrix.
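concatenate also accepts an axis argument; a minimal sketch joining along axis 1 (columns), which requires the matrices to have matching row counts:
# Column-wise concatenation of the matrices defined above
merge_r = tensor.concatenate([a, b], axis=1)
concat_cols = theano.function([a, b], merge_r)
print(concat_cols([[1, 2]], [[3, 4]]))  # expected: [[1. 2. 3. 4.]]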
This blog is a record of the author's personal study and the posts are not guaranteed to be original. Some reposted articles include their source URL, and some compile multiple online sources, where omissions in attribution are inevitable; please contact the author in case of infringement.
Unless otherwise noted, posts are original and licensed under CC 4.0 BY-SA.
posted on 2024-02-13 12:44 Angry_Panda