Database Pessimistic Locks
Pessimistic locking
A pessimistic lock takes a pessimistic view of data modification: it assumes that concurrent conflicts will occur whenever the data is modified, so it keeps the data locked for the entire duration of processing. Pessimistic locking is usually implemented on top of the lock mechanism the database itself provides, because only a database-level lock can truly guarantee exclusive access to the data. Even if the application layer implements its own locking, it cannot stop external systems from modifying the data.
Example scenario
The goods table t_goods has a status column: status 1 means the item has not yet been ordered, and status 2 means it has. When placing an order for an item, we must therefore make sure its status is 1. Suppose the item's id is 1. Without any locking, the operations would be:
-- 1. Query the item
select status from t_goods where id=1;
-- 2. Create an order from the item
insert into t_orders (id,goods_id) values (null,1);
-- 3. Set the item's status to 2
update t_goods set status=2 where id=1;
Under high concurrency this flow can easily go wrong. As noted above, an order may only be placed while the item's status is 1, and the query in step 1 does return status 1. But by the time we run the UPDATE in step 3, someone else may already have ordered the item and set its status to 2. Since we never re-check, the same item can end up being ordered twice, leaving the data inconsistent. This approach is therefore unsafe.
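The race can be reproduced deterministically with a small in-memory model (class and field names here are invented for illustration, not part of the original schema): a latch forces both threads to pass the status check before either one writes, so both place an order.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory model of the t_goods / t_orders flow.
public class LostUpdateDemo {
    public static volatile int status = 1;                 // 1 = available, 2 = ordered
    public static final AtomicInteger orders = new AtomicInteger();

    public static int run() {
        CountDownLatch bothRead = new CountDownLatch(2);
        Runnable placeOrder = () -> {
            int seen = status;                             // step 1: SELECT status
            bothRead.countDown();
            try {
                bothRead.await();                          // both threads read before either writes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            if (seen == 1) {                               // the check passes for BOTH threads
                orders.incrementAndGet();                  // step 2: INSERT order
                status = 2;                                // step 3: UPDATE status
            }
        };
        Thread a = new Thread(placeOrder);
        Thread b = new Thread(placeOrder);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return orders.get();
    }

    public static void main(String[] args) {
        System.out.println("orders placed: " + run());     // prints 2 — the item was sold twice
    }
}
```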
Implementing it with a pessimistic lock
In the scenario above, there is an order-processing step between querying the item and updating it. The idea of pessimistic locking is to lock the record as soon as we query it and hold the lock until the update is done; while the record is locked, no third party can modify it. To use pessimistic locking in MySQL, we must first disable the database's autocommit property.
set autocommit=0;
-- With autocommit off, we can run the normal business flow:
-- 0. Start a transaction (any one of the three forms)
begin; / begin work; / start transaction;
-- 1. Query the item
select status from t_goods where id=1 for update;
-- 2. Create an order from the item
insert into t_orders (id,goods_id) values (null,1);
-- 3. Set the item's status to 2
update t_goods set status=2 where id=1;
-- 4. Commit the transaction
commit; / commit work;
Note: begin/commit above mark the start and end of the transaction. Because we disabled MySQL's autocommit in the previous step, the commit must be issued manually; the details of transaction control are beyond the scope of this article.
In step 1 we ran the query select status from t_goods where id=1 for update;. Unlike an ordinary query, it uses SELECT ... FOR UPDATE, which gives us a pessimistic lock through the database: the row with id 1 in t_goods is now locked, and any other transaction that wants to lock or modify that row must wait until this transaction commits. This guarantees the data cannot be changed by another transaction in the meantime.
Note: within a transaction, only SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE on the same rows waits for the other transaction to finish; an ordinary SELECT ... is not affected. In the example above, after I run select status from t_goods where id=1 for update;, running the same select ... for update in a second transaction blocks until the first transaction commits. But a plain select status from t_goods where id=1; in the second transaction returns normally, unaffected by the first transaction.
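The blocking rules in the note above can be laid out as a two-session timeline (a sketch using the tables and ids from the example):

```sql
-- Session A                            -- Session B
begin;
select status from t_goods
  where id=1 for update;
-- A now holds the row lock
                                        begin;
                                        select status from t_goods where id=1;
                                        -- plain SELECT: returns immediately
                                        select status from t_goods
                                          where id=1 for update;
                                        -- blocks, waiting for Session A
commit;
-- lock released
                                        -- B's FOR UPDATE now returns
```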
Implementing a distributed database pessimistic lock in Java
Based on the database table delta_lock
CREATE TABLE `delta_lock` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `lock_key` varchar(128) NOT NULL COMMENT 'lock name, unique across the table',
  `owner` varchar(128) DEFAULT NULL COMMENT 'locking thread',
  `host_ip` varchar(128) DEFAULT NULL COMMENT 'locking host',
  `expire_ts` bigint(20) DEFAULT NULL COMMENT 'expiry timestamp (ms)',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'creation time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `ukey_lock_key` (`lock_key`)
) ENGINE=InnoDB AUTO_INCREMENT=412 DEFAULT CHARSET=utf8mb4;
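What makes this table usable as a lock is the UNIQUE KEY on lock_key: acquiring the lock is an INSERT that can succeed for only one caller at a time, and releasing it is a DELETE of that caller's own row. A sketch with made-up values:

```sql
-- Acquire: exactly one concurrent INSERT succeeds; the others fail
-- with a duplicate-key error on ukey_lock_key.
INSERT INTO delta_lock (lock_key, owner, host_ip, expire_ts)
VALUES ('demo-lock', 'worker-1', '10.0.0.5',
        UNIX_TIMESTAMP() * 1000 + 600000);  -- now + 10 min, in ms

-- Release: delete only the row this owner/host inserted.
DELETE FROM delta_lock
WHERE lock_key = 'demo-lock' AND owner = 'worker-1' AND host_ip = '10.0.0.5';
```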
A CRUD service for the delta_lock table
package delta.dao.service;

import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.core.toolkit.Wrappers;
import delta.dao.entity.MLock;
import delta.dao.mapper.MLockMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.dao.DuplicateKeyException;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import javax.annotation.Resource;
import java.util.Objects;

@Component
public class LockService {
    private static final Logger logger = LoggerFactory.getLogger(LockService.class);

    @Resource
    private MLockMapper mLockMapper;

    @Transactional
    public int insert(String lockKey, Long expireMill, String owner, String hostIp) {
        MLock mLock = MLock.builder()
                .lockKey(lockKey)
                .expireTs(System.currentTimeMillis() + expireMill)
                .owner(owner)
                .hostIp(hostIp)
                .build();
        try {
            return mLockMapper.insert(mLock);
        } catch (DuplicateKeyException e) {
            // Unique key on lock_key was hit: another caller holds the lock.
            return 0;
        } catch (Exception ex) {
            logger.error("write db lock failed", ex);
            return 0;
        }
    }

    @Transactional
    public int delete(String lockKey, String owner, String hostIp) {
        // Release only the row this owner/host inserted.
        QueryWrapper<MLock> query = Wrappers.query();
        query.eq(MLock.LOCK_KEY_COLUMN, lockKey);
        query.eq(MLock.OWNER_COLUMN, owner);
        query.eq(MLock.HOST_IP_COLUMN, hostIp);
        return mLockMapper.delete(query);
    }

    @Transactional
    public int delete(String lockKey) {
        // Force-release, regardless of owner (used for expired locks).
        QueryWrapper<MLock> query = Wrappers.query();
        query.eq(MLock.LOCK_KEY_COLUMN, lockKey);
        return mLockMapper.delete(query);
    }

    public boolean checkExpire(String lockKey) {
        QueryWrapper<MLock> query = Wrappers.query();
        query.eq(MLock.LOCK_KEY_COLUMN, lockKey);
        MLock lock = mLockMapper.selectOne(query);
        if (Objects.isNull(lock)) {
            return false;
        }
        return System.currentTimeMillis() >= lock.getExpireTs();
    }
}
Wrapping the CRUD operations into a directly usable database Lock
package delta.dao.service;

import delta.common.utils.BeanContext;
import lombok.SneakyThrows;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

import javax.annotation.PreDestroy;
import java.net.InetAddress;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

/**
 * Note: instances of this class are created with `new` rather than obtained
 * from the Spring container, so the @Scope/@Transactional/@PreDestroy
 * annotations here take no effect; the real transactional boundary is the
 * @Transactional methods of LockService.
 */
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
public class LockManager implements Lock {
    private static final Logger logger = LoggerFactory.getLogger(LockManager.class);

    private static final Long DEFAULT_LOCK_TIMEOUT = 600 * 1000L;

    private final LockService lockService;
    private final String lockKey;
    private boolean isLock = false;

    /**
     * Maximum time the lock may be held, in milliseconds. Once it expires,
     * the lock can be taken over by another process even if it has not
     * been released.
     */
    private final Long expireMill;
    private final Long sleepTime = 1000L;

    public LockManager(String lockKey) {
        this(lockKey, DEFAULT_LOCK_TIMEOUT);
    }

    public LockManager(String lockKey, Long expireMill) {
        this.lockKey = lockKey;
        this.expireMill = expireMill;
        this.lockService = BeanContext.getBean(LockService.class);
    }

    @PreDestroy
    public void destroy() {
        unlock();
    }

    @Override
    public void lock() {
        boolean interrupted = false;
        while (true) {
            try {
                lockInterruptibly();
                if (interrupted) {
                    // Restore the interrupt flag swallowed while waiting.
                    Thread.currentThread().interrupt();
                }
                return;
            } catch (InterruptedException ignore) {
                interrupted = true;
            }
        }
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        while (!tryLock()) {
            TimeUnit.MILLISECONDS.sleep(sleepTime);
        }
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    @SneakyThrows
    @Override
    public boolean tryLock() {
        String owner = Thread.currentThread().getName();
        String hostAddress = InetAddress.getLocalHost().getHostAddress();
        int affectRows = lockService.insert(lockKey, expireMill, owner, hostAddress);
        if (affectRows > 0) {
            logger.info("@@ host: {}, thread: {}, acquired lock {}", hostAddress, owner, lockKey);
            isLock = true;
            return true;
        } else {
            // The holder's lease has expired: force-release it regardless of owner, then retry.
            if (lockService.checkExpire(lockKey)) {
                forceUnlock(lockKey);
                return tryLock();
            }
            logger.info("@@ host: {}, thread: {}, failed to acquire lock {}", hostAddress, owner, lockKey);
            return false;
        }
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        long start = System.currentTimeMillis();
        long waitTime = unit.toMillis(time);
        while (!tryLock()) {
            if (System.currentTimeMillis() - start > waitTime) {
                return false;
            }
            TimeUnit.MILLISECONDS.sleep(sleepTime);
        }
        return true;
    }

    @SneakyThrows
    @Override
    public void unlock() {
        String owner = Thread.currentThread().getName();
        String hostAddress = InetAddress.getLocalHost().getHostAddress();
        if (!isLock) {
            logger.info("@@ host: {}, thread: {} does not hold lock {}, nothing to release", hostAddress, owner, lockKey);
            return;
        }
        logger.info("@@ host: {}, thread: {} releasing lock {}", hostAddress, owner, lockKey);
        lockService.delete(lockKey, owner, hostAddress);
        isLock = false;
    }

    public void forceUnlock(String lockKey) {
        logger.info("@@ force-deleting expired lockKey: {}", lockKey);
        lockService.delete(lockKey);
    }

    @Override
    public Condition newCondition() {
        throw new UnsupportedOperationException();
    }
}
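The core behavior of LockManager can be sketched without a database. In this hypothetical in-memory analogue (the class name MapLock is invented), ConcurrentHashMap.putIfAbsent plays the role of the unique-key INSERT, and the stored deadline reproduces the expire_ts-based takeover of stale locks:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical in-process analogue of LockManager backed by a map
// instead of the delta_lock table.
public class MapLock {
    private static final ConcurrentMap<String, Long> TABLE = new ConcurrentHashMap<>();
    private final String lockKey;
    private final long expireMill;

    public MapLock(String lockKey, long expireMill) {
        this.lockKey = lockKey;
        this.expireMill = expireMill;
    }

    public boolean tryLock() {
        long deadline = System.currentTimeMillis() + expireMill;
        Long prev = TABLE.putIfAbsent(lockKey, deadline);   // the "INSERT"
        if (prev == null) {
            return true;                                    // row created: lock acquired
        }
        if (System.currentTimeMillis() >= prev) {
            // Holder's lease expired: atomic takeover, which only succeeds
            // if nobody else has replaced the stale entry meanwhile.
            return TABLE.replace(lockKey, prev, deadline);
        }
        return false;                                       // lock held by someone else
    }

    public void unlock() {
        TABLE.remove(lockKey);                              // the "DELETE"
    }
}
```

The real implementation's polling loop (lockInterruptibly, tryLock with a timeout) layers the same retry-with-sleep pattern on top of this primitive.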
Using the database lock
LockManager lockManager = new LockManager(TIMLINE_ACCESS_TOKEN_DISLOCK_PREFIX + aspID);
if (lockManager.tryLock()) {
    try {
        // ... critical section ...
    } finally {
        lockManager.unlock();
    }
}
Distributed database lock + consistent cache handling
The cache is checked and refreshed only when the data is fetched
public AccessSignatureResult getTimlineSignature() throws Exception {
    String aspID = this.timlinePropertiesConfig.getAspid();
    AccessSignatureResult signatureResult = null;
    String atRet = mTokenService.getToken(TIMLINE_ACCESS_TOKEN_PREFIX + aspID);
    LockManager lockManager = new LockManager(TIMLINE_ACCESS_TOKEN_DISLOCK_PREFIX + aspID);
    // Nothing cached in the database
    if (atRet == null) {
        log.info("[Can Not Find AccessSignatureResult From Redis By aspID] - {}", aspID);
        // Take the distributed lock: on success refresh the token;
        // on failure fall back to what another thread caches.
        if (lockManager.tryLock()) {
            // Holding the distributed lock: refresh the accessToken.
            try {
                log.info("[Get DistributedLock And Want to refresh token]");
                // Double check after acquiring the lock, so that concurrent
                // threads do not refresh the token more than once.
                if (this.doubleCheckToken(aspID)) {
                    log.info("[Get DistributedLock And Double check pass And Refresh AccessSignatureResult]");
                    signatureResult = this.refreshTimLineAccessToken();
                } else {
                    log.info("[Double check fail And not to Refresh AccessSignatureResult]");
                }
            } finally {
                lockManager.unlock();
            }
            if (signatureResult == null) {
                log.error("[Get Timline AccessSignatureResult Failed]");
                throw new Exception("Get Timline AccessSignatureResult Failed");
            }
        } else {
            // Wait for another thread to repopulate the cache.
            log.info("Try Get DistributedLock Failed And Wait 3 times");
            for (int i = 1; i <= 3; i++) {
                Thread.sleep(1000);
                String tmp = mTokenService.getToken(TIMLINE_ACCESS_TOKEN_PREFIX + aspID);
                if (tmp != null) {
                    signatureResult = JSON.parseObject(tmp, AccessSignatureResult.class);
                    log.info("[After Wait {} Times, Reget AccessSignatureResult From Redis] - {}", i, signatureResult);
                    break;
                }
            }
        }
    }
    // The database cache has a token
    else {
        log.info("[Get AccessSignatureResult From DB]");
        signatureResult = JSON.parseObject(atRet, AccessSignatureResult.class);
        // Refresh proactively if the token expires within 15 minutes.
        if (signatureResult.getExpiresSecs() - System.currentTimeMillis() <= QUARTER) {
            // Take the distributed lock: on success refresh the token;
            // on failure keep using the cached AccessSignatureResult.
            log.info("[Your AccessSignatureResult will expire in 15 minutes]");
            if (lockManager.tryLock()) {
                try {
                    log.info("[Get DistributedLock And Want to refresh token]");
                    // Double check after acquiring the lock, so that concurrent
                    // threads do not refresh the token more than once.
                    if (this.doubleCheckToken(aspID)) {
                        log.info("[Get DistributedLock And Double check pass And Refresh AccessSignatureResult]");
                        AccessSignatureResult temp = this.refreshTimLineAccessToken();
                        if (temp != null) {
                            signatureResult = temp;
                        }
                    } else {
                        log.info("[Double check fail And not to Refresh AccessSignatureResult]");
                    }
                } finally {
                    lockManager.unlock();
                }
            }
        }
    }
    if (signatureResult == null) {
        log.error("[Get AccessSignatureResult Error]");
        throw new Exception("Get AccessSignatureResult Error");
    }
    return signatureResult;
}
/**
 * Double-check whether the token still needs to be refreshed.
 * @return true: the token needs refreshing; false: it does not
 */
private boolean doubleCheckToken(String aspID) {
    try {
        String atRet = mTokenService.getToken(TIMLINE_ACCESS_TOKEN_PREFIX + aspID);
        if (null != atRet) {
            AccessSignatureResult signatureResult = JSON.parseObject(atRet, AccessSignatureResult.class);
            if (signatureResult.getExpiresSecs() - System.currentTimeMillis() > QUARTER) {
                // More than QUARTER ms of validity left: no refresh needed.
                return false;
            }
        }
    } catch (Exception ex) {
        log.error("double check token error: " + ex.getMessage(), ex);
    }
    // Note: the original returned from a finally block, which silently
    // discards any exception in flight; returning here avoids that.
    return true;
}
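The essence of the cache-plus-double-check flow above can be sketched in-process (all names here are hypothetical; a ReentrantLock stands in for LockManager, and refreshCount counts the expensive remote refreshes). Without the second check, every caller that saw a stale cache would refresh; with it, only the first one does:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical, simplified version of getTimlineSignature's locking logic.
public class TokenCache {
    public static final long QUARTER = 15 * 60 * 1000L;
    public static final AtomicInteger refreshCount = new AtomicInteger();
    private final ReentrantLock lock = new ReentrantLock();  // stand-in for LockManager
    private volatile Long expiresAt;                         // null = no cached token

    public String getToken() {
        Long exp = expiresAt;
        if (exp != null && exp - System.currentTimeMillis() > QUARTER) {
            return "cached-token";                 // fresh enough: no lock needed
        }
        lock.lock();
        try {
            // Double check: another caller may have refreshed while we waited.
            exp = expiresAt;
            if (exp == null || exp - System.currentTimeMillis() <= QUARTER) {
                refreshCount.incrementAndGet();    // the expensive remote refresh
                expiresAt = System.currentTimeMillis() + 2 * QUARTER;
            }
        } finally {
            lock.unlock();
        }
        return "cached-token";
    }
}
```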
/**
 * Fetch a new access token from the 咚咚 (Timline) API.
 *
 * @return the refreshed AccessSignatureResult, or null if the refresh failed
 */
public AccessSignatureResult refreshTimLineAccessToken() {
    String aspID = this.timlinePropertiesConfig.getAspid();
    AppSigInfo info = buildAppSigInfo();
    log.info("[Refresh AccessToken] - Request : {}", info);
    AccessSignatureResult response = grantService.refreshAccessSignature(info);
    log.info("[Refresh AccessToken] - Response : {}", response);
    if (response.getCode() == TIMLINE_GRANT_SUCCESS) {
        // Convert the relative lifetime (seconds) into an absolute expiry in ms.
        response.setExpiresSecs(System.currentTimeMillis() + response.getExpiresSecs() * 1000);
        mTokenService.setToken(this.TIMLINE_ACCESS_TOKEN_PREFIX + aspID, JSON.toJSONString(response));
    } else {
        log.error("[Refresh Timline AccessToken Failed] - {}", response);
        response = null;
    }
    return response;
}