Abstract:
# main directory
currentDir=$(cd $(dirname $0); pwd)
echo "currentDir = ${currentDir}"
source /etc/profile
source ~/.bashrc
conda activate milvus_env
python --vers ... Read more
Abstract:
Why does the training process need a mask mechanism? Two reasons. 1. To block future information, preventing future frames from participating in training. 2. To handle sequences of different lengths: shorter sequences in a batch are padded, and the mask ensures the padding does not affect the model's output. How is the mask mechanism implemented? 1. The mask that blocks future information: in the self-attention layer, construct an upper-triangular matrix ... Read more
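The two masks described above (the upper-triangular causal mask and the padding mask) can be combined and applied to raw attention scores before the softmax. Below is a minimal numpy sketch of that idea; the function names and the toy 4×4 score matrix are illustrative, not from the original post.

```python
import numpy as np

def build_masks(seq_len, pad_len):
    # Causal mask: upper-triangular True entries mark future positions to block.
    causal = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    # Padding mask: True for padded (invalid) key positions at the end.
    padding = np.zeros(seq_len, dtype=bool)
    if pad_len:
        padding[-pad_len:] = True
    # A position is masked if it is in the future OR is padding.
    return causal | padding[None, :]

def masked_softmax(scores, mask):
    # Masked positions get -inf so softmax assigns them zero weight.
    scores = np.where(mask, -np.inf, scores)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))          # uniform raw attention scores (toy example)
mask = build_masks(4, pad_len=1)   # last position of the sequence is padding
weights = masked_softmax(scores, mask)
```

With uniform scores, each query attends equally to its visible, non-padded keys: the first row puts all weight on position 0, and no row ever attends to the padded last position.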
Abstract:
import json
import sys
import time
from pymilvus import connections, Collection, FieldSchema, CollectionSchema, DataType, utility, Index
# Connect to Milvus
def c ... Read more
Abstract:
As the root user, install Docker with yum -y install docker, then start it with systemctl start docker. Verify with sudo docker ps; if it runs as shown in the figure, the daemon started successfully. Pull the image with sudo docker pull tensorflow/serving:2.11.0, as shown ... Read more
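Once the tensorflow/serving container is running, TensorFlow Serving exposes a REST predict endpoint of the form /v1/models/<name>:predict that accepts a JSON body with an "instances" list. As a sketch, here is how such a request could be assembled in Python; the model name "my_model", host, and port are assumptions, not values from the original post.

```python
import json

def build_predict_request(model_name, instances, host="localhost", port=8501):
    # TensorFlow Serving's REST predict URL: /v1/models/<model_name>:predict
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    # The REST API expects a JSON body with an "instances" list of inputs.
    body = json.dumps({"instances": instances})
    return url, body

# "my_model" is a hypothetical model name for illustration.
url, body = build_predict_request("my_model", [[1.0, 2.0, 3.0]])
```

The returned url and body can then be sent with any HTTP client (e.g. a POST via the requests library) to get predictions from the served model.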
Abstract:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers impo ... Read more
Abstract:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers impo ... Read more
Abstract:
Model distillation
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, f1 ... Read more
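The core of model distillation is a loss that mixes the teacher's temperature-softened predictions with the ordinary cross-entropy on the true labels. As a framework-free illustration (the original post uses TensorFlow; the function names, temperature, and mixing weight below are assumptions), a minimal numpy sketch:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: larger T gives softer distributions.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft part: cross-entropy between teacher and student soft targets,
    # scaled by T^2 to keep its gradient magnitude comparable to the hard part.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -np.sum(p_teacher * np.log(p_student), axis=-1).mean() * (T ** 2)
    # Hard part: ordinary cross-entropy against the true labels.
    p_hard = softmax(student_logits)
    hard = -np.log(p_hard[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(
    student_logits=[[2.0, 0.5], [0.2, 1.5]],
    teacher_logits=[[3.0, 0.0], [0.0, 2.0]],
    labels=[0, 1],
)
```

In a real training loop the same combined loss would be computed on framework tensors so gradients flow into the student model only.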
Abstract:
1. BERT binary classification
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import BertTokenizer, TFBertModel
import pa ... Read more
Abstract:
Website: https://www.huaxiaozhuan.com/
GitHub - firechecking/CleanTransformer: an implementation of transformer, bert, gpt, and diffusion models for learni ... Read more
Abstract:
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_size, heads):
        sup ... Read more
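The truncated class above implements multi-head self-attention in PyTorch. As a framework-free sketch of the same computation (split the embedding into heads, run scaled dot-product attention per head, concatenate), here is a numpy version; identity projections stand in for the learned W_q/W_k/W_v weights of the real module, and embed_size is assumed divisible by heads:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(x, heads):
    # x: (seq_len, embed_size); identity projections replace learned weights.
    seq_len, embed_size = x.shape
    assert embed_size % heads == 0
    d = embed_size // heads
    # Split the embedding into `heads` chunks of size d: (heads, seq_len, d).
    q = k = v = x.reshape(seq_len, heads, d).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (heads, seq_len, seq_len).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = softmax(scores)
    out = weights @ v                 # (heads, seq_len, d)
    # Concatenate the heads back to (seq_len, embed_size).
    return out.transpose(1, 0, 2).reshape(seq_len, embed_size)

x = np.random.default_rng(0).normal(size=(5, 8))
y = multi_head_self_attention(x, heads=2)
```

The PyTorch module adds learned linear projections for q, k, v and an output projection, but the shape bookkeeping is exactly the above.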