Redis + xxl-job: an initial design for a like feature
In general, a like feature involves the following aspects:
1. We need to know how many people have liked a given subject, and also which subjects each user has liked.
2. Liking is a high-frequency action. Once there are many users, likes and favorites are happening all the time; a traditional database-only approach would generate a huge amount of traffic and struggle under that concurrency, so we use Redis. (If concurrency is low and the code is designed to cope with the load, MySQL alone is also fine; pick the best solution for your specific scenario. This article is mainly for learning purposes.)
3. Read queries can go straight to Redis, while persisted data is queried from the database. To get the Redis data into the database, we use the scheduled-task framework xxl-job to flush it periodically, as sketched below.
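For point 3, a minimal sketch of the read path: hit Redis first and fall back to the database only on a cache miss. The method and the persistence query `countBySubjectId` are illustrative assumptions; the real key names appear in the code that follows.

```java
// Sketch of the read path: Redis first, database as the source of truth on a miss.
// subjectLikedService.countBySubjectId(...) is a hypothetical persistence query.
public int getLikedCount(Long subjectId) {
    String countKey = "subject.liked.count." + subjectId;
    Integer cached = redisUtil.getInt(countKey);
    if (cached != null) {
        return cached;
    }
    return subjectLikedService.countBySubjectId(subjectId);
}
```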
Like feature design (this article only shows the core code; the data-flow conversions are omitted):
```java
// Redis keys: a hash that buffers pending like records for the sync job,
// a per-subject like counter, and a per-user "liked" marker.
private static final String SUBJECT_LIKED_KEY = "subject.liked";
private static final String SUBJECT_LIKED_COUNT_KEY = "subject.liked.count";
private static final String SUBJECT_LIKED_DETAIL_KEY = "subject.liked.detail";

@Override
public void add(SubjectLikedBO subjectLikedBO) {
    Long subjectId = subjectLikedBO.getSubjectId();
    String likeUserId = subjectLikedBO.getLikeUserId();
    Integer status = subjectLikedBO.getStatus();
    // Buffer the raw event in the hash so the xxl-job task can sync it later
    String hashKey = buildSubjectLikedKey(subjectId.toString(), likeUserId);
    redisUtil.putHash(SUBJECT_LIKED_KEY, hashKey, status);
    String detailKey = SUBJECT_LIKED_DETAIL_KEY + "." + subjectId + "." + likeUserId;
    String countKey = SUBJECT_LIKED_COUNT_KEY + "." + subjectId;
    if (SubjectLikedStatusEnum.LIKED.getCode() == status) {
        redisUtil.increment(countKey, 1);
        redisUtil.set(detailKey, "1");
    } else {
        // Un-like: never let the counter go negative
        Integer count = redisUtil.getInt(countKey);
        if (Objects.isNull(count) || count <= 0) {
            return;
        }
        redisUtil.increment(countKey, -1);
        redisUtil.del(detailKey);
    }
}

private String buildSubjectLikedKey(String subjectId, String userId) {
    return subjectId + ":" + userId;
}

@Getter
public enum SubjectLikedStatusEnum {

    LIKED(1, "点赞"),
    UN_LIKED(0, "取消点赞");

    public int code;

    public String desc;

    SubjectLikedStatusEnum(int code, String desc) {
        this.code = code;
        this.desc = desc;
    }

    public static SubjectLikedStatusEnum getByCode(int codeVal) {
        for (SubjectLikedStatusEnum resultCodeEnum : SubjectLikedStatusEnum.values()) {
            if (resultCodeEnum.code == codeVal) {
                return resultCodeEnum;
            }
        }
        return null;
    }
}
```
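A quick illustration of how add() might be invoked. The BO setters are inferred from the getters above, and the service field name is a placeholder rather than the project's actual name.

```java
// Record that user "1001" liked subject 42
SubjectLikedBO subjectLikedBO = new SubjectLikedBO();
subjectLikedBO.setSubjectId(42L);
subjectLikedBO.setLikeUserId("1001");
subjectLikedBO.setStatus(SubjectLikedStatusEnum.LIKED.getCode());
subjectLikedDomainService.add(subjectLikedBO);   // subjectLikedDomainService is a placeholder name
```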
A custom RedisUtil wrapper class:
```java
@Component
@Slf4j
public class RedisUtil {

    @Resource
    private RedisTemplate redisTemplate;

    private static final String CACHE_KEY_SEPARATOR = ".";

    /**
     * Build a cache key
     */
    public String buildKey(String... strObjs) {
        return Stream.of(strObjs).collect(Collectors.joining(CACHE_KEY_SEPARATOR));
    }

    /**
     * Whether a key exists
     */
    public boolean exist(String key) {
        return redisTemplate.hasKey(key);
    }

    /**
     * Delete a key
     */
    public boolean del(String key) {
        return redisTemplate.delete(key);
    }

    /**
     * set (no expiration)
     */
    public void set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
    }

    /**
     * set if absent (with expiration)
     */
    public boolean setNx(String key, String value, Long time, TimeUnit timeUnit) {
        return redisTemplate.opsForValue().setIfAbsent(key, value, time, timeUnit);
    }

    /**
     * Get a string value
     */
    public String get(String key) {
        return (String) redisTemplate.opsForValue().get(key);
    }

    public Boolean zAdd(String key, String value, Long score) {
        return redisTemplate.opsForZSet().add(key, value, Double.valueOf(String.valueOf(score)));
    }

    public Long countZset(String key) {
        return redisTemplate.opsForZSet().size(key);
    }

    public Set<String> rangeZset(String key, long start, long end) {
        return redisTemplate.opsForZSet().range(key, start, end);
    }

    public Long removeZset(String key, Object value) {
        return redisTemplate.opsForZSet().remove(key, value);
    }

    public void removeZsetList(String key, Set<String> value) {
        value.stream().forEach((val) -> redisTemplate.opsForZSet().remove(key, val));
    }

    public Double score(String key, Object value) {
        return redisTemplate.opsForZSet().score(key, value);
    }

    public Set<String> rangeByScore(String key, long start, long end) {
        return redisTemplate.opsForZSet().rangeByScore(key,
                Double.valueOf(String.valueOf(start)), Double.valueOf(String.valueOf(end)));
    }

    public Object addScore(String key, Object obj, double score) {
        return redisTemplate.opsForZSet().incrementScore(key, obj, score);
    }

    public Object rank(String key, Object obj) {
        return redisTemplate.opsForZSet().rank(key, obj);
    }

    public Set<ZSetOperations.TypedTuple<String>> rankWithScore(String key, long start, long end) {
        return redisTemplate.opsForZSet().reverseRangeWithScores(key, start, end);
    }

    public void putHash(String key, String hashKey, Object hashVal) {
        redisTemplate.opsForHash().put(key, hashKey, hashVal);
    }

    public Integer getInt(String key) {
        return (Integer) redisTemplate.opsForValue().get(key);
    }

    public void increment(String key, Integer count) {
        redisTemplate.opsForValue().increment(key, count);
    }

    /**
     * Drain a hash: scan all entries into a map and delete each field as it is read
     */
    public Map<Object, Object> getHashAndDelete(String key) {
        Map<Object, Object> map = new HashMap<>();
        Cursor<Map.Entry<Object, Object>> cursor = redisTemplate.opsForHash().scan(key, ScanOptions.NONE);
        while (cursor.hasNext()) {
            Map.Entry<Object, Object> entry = cursor.next();
            Object hashKey = entry.getKey();
            Object value = entry.getValue();
            map.put(hashKey, value);
            redisTemplate.opsForHash().delete(key, hashKey);
        }
        return map;
    }
}
```
subject.liked.detail: whether a given user has liked a given subject
subject.liked.count: how many likes a subject has received
subject.liked: the likers' data, a hash of subjectId:userId -> status that buffers records for the sync job (a read-side sketch follows below)
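With those keys in place, reading a user's like status back out is straightforward. A minimal sketch of the query side; the helper method is illustrative, not from the original code.

```java
// Has this user liked this subject? add() writes "1" to the detail key on like
// and deletes it on un-like, so the key's existence is the answer.
public boolean isLiked(Long subjectId, String likeUserId) {
    String detailKey = "subject.liked.detail." + subjectId + "." + likeUserId;
    return redisUtil.exist(detailKey);
}
```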
The like-side design is done. So how do we schedule the sync task uniformly through xxl-job?
Official documentation: https://www.xuxueli.com/xxl-job/#%E3%80%8A%E5%88%86%E5%B8%83%E5%BC%8F%E4%BB%BB%E5%8A%A1%E8%B0%83%E5%BA%A6%E5%B9%B3%E5%8F%B0XXL-JOB%E3%80%8B
This article demonstrates running xxl-job locally; a Docker deployment of xxl-job is included at the end.
Edit the local database connection settings in the admin module, then execute the SQL script under doc/db:
```sql
#
# XXL-JOB v2.4.2-SNAPSHOT
# Copyright (c) 2015-present, xuxueli.

CREATE database if NOT EXISTS `xxl_job` default character set utf8mb4 collate utf8mb4_unicode_ci;
use `xxl_job`;

SET NAMES utf8mb4;

CREATE TABLE `xxl_job_info` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_desc` varchar(255) NOT NULL,
  `add_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `author` varchar(64) DEFAULT NULL COMMENT '作者',
  `alarm_email` varchar(255) DEFAULT NULL COMMENT '报警邮件',
  `schedule_type` varchar(50) NOT NULL DEFAULT 'NONE' COMMENT '调度类型',
  `schedule_conf` varchar(128) DEFAULT NULL COMMENT '调度配置,值含义取决于调度类型',
  `misfire_strategy` varchar(50) NOT NULL DEFAULT 'DO_NOTHING' COMMENT '调度过期策略',
  `executor_route_strategy` varchar(50) DEFAULT NULL COMMENT '执行器路由策略',
  `executor_handler` varchar(255) DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) DEFAULT NULL COMMENT '执行器任务参数',
  `executor_block_strategy` varchar(50) DEFAULT NULL COMMENT '阻塞处理策略',
  `executor_timeout` int(11) NOT NULL DEFAULT '0' COMMENT '任务执行超时时间,单位秒',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT '0' COMMENT '失败重试次数',
  `glue_type` varchar(50) NOT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) DEFAULT NULL COMMENT 'GLUE备注',
  `glue_updatetime` datetime DEFAULT NULL COMMENT 'GLUE更新时间',
  `child_jobid` varchar(255) DEFAULT NULL COMMENT '子任务ID,多个逗号分隔',
  `trigger_status` tinyint(4) NOT NULL DEFAULT '0' COMMENT '调度状态:0-停止,1-运行',
  `trigger_last_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '上次调度时间',
  `trigger_next_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '下次调度时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `executor_address` varchar(255) DEFAULT NULL COMMENT '执行器地址,本次执行的地址',
  `executor_handler` varchar(255) DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) DEFAULT NULL COMMENT '执行器任务参数',
  `executor_sharding_param` varchar(20) DEFAULT NULL COMMENT '执行器任务分片参数,格式如 1/2',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT '0' COMMENT '失败重试次数',
  `trigger_time` datetime DEFAULT NULL COMMENT '调度-时间',
  `trigger_code` int(11) NOT NULL COMMENT '调度-结果',
  `trigger_msg` text COMMENT '调度-日志',
  `handle_time` datetime DEFAULT NULL COMMENT '执行-时间',
  `handle_code` int(11) NOT NULL COMMENT '执行-状态',
  `handle_msg` text COMMENT '执行-日志',
  `alarm_status` tinyint(4) NOT NULL DEFAULT '0' COMMENT '告警状态:0-默认、1-无需告警、2-告警成功、3-告警失败',
  PRIMARY KEY (`id`),
  KEY `I_trigger_time` (`trigger_time`),
  KEY `I_handle_code` (`handle_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_log_report` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `trigger_day` datetime DEFAULT NULL COMMENT '调度-时间',
  `running_count` int(11) NOT NULL DEFAULT '0' COMMENT '运行中-日志数量',
  `suc_count` int(11) NOT NULL DEFAULT '0' COMMENT '执行成功-日志数量',
  `fail_count` int(11) NOT NULL DEFAULT '0' COMMENT '执行失败-日志数量',
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `i_trigger_day` (`trigger_day`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_logglue` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `glue_type` varchar(50) DEFAULT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) NOT NULL COMMENT 'GLUE备注',
  `add_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_registry` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `registry_group` varchar(50) NOT NULL,
  `registry_key` varchar(255) NOT NULL,
  `registry_value` varchar(255) NOT NULL,
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `i_g_k_v` (`registry_group`,`registry_key`,`registry_value`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_group` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `app_name` varchar(64) NOT NULL COMMENT '执行器AppName',
  `title` varchar(12) NOT NULL COMMENT '执行器名称',
  `address_type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '执行器地址类型:0=自动注册、1=手动录入',
  `address_list` text COMMENT '执行器地址列表,多地址逗号分隔',
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL COMMENT '账号',
  `password` varchar(50) NOT NULL COMMENT '密码',
  `role` tinyint(4) NOT NULL COMMENT '角色:0-普通用户、1-管理员',
  `permission` varchar(255) DEFAULT NULL COMMENT '权限:执行器ID列表,多个逗号分割',
  PRIMARY KEY (`id`),
  UNIQUE KEY `i_username` (`username`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_lock` (
  `lock_name` varchar(50) NOT NULL COMMENT '锁名称',
  PRIMARY KEY (`lock_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

INSERT INTO `xxl_job_group`(`id`, `app_name`, `title`, `address_type`, `address_list`, `update_time`) VALUES (1, 'xxl-job-executor-sample', '示例执行器', 0, NULL, '2018-11-03 22:21:31');
INSERT INTO `xxl_job_info`(`id`, `job_group`, `job_desc`, `add_time`, `update_time`, `author`, `alarm_email`, `schedule_type`, `schedule_conf`, `misfire_strategy`, `executor_route_strategy`, `executor_handler`, `executor_param`, `executor_block_strategy`, `executor_timeout`, `executor_fail_retry_count`, `glue_type`, `glue_source`, `glue_remark`, `glue_updatetime`, `child_jobid`) VALUES (1, 1, '测试任务1', '2018-11-03 22:21:31', '2018-11-03 22:21:31', 'XXL', '', 'CRON', '0 0 0 * * ? *', 'DO_NOTHING', 'FIRST', 'demoJobHandler', '', 'SERIAL_EXECUTION', 0, 0, 'BEAN', '', 'GLUE代码初始化', '2018-11-03 22:21:31', '');
INSERT INTO `xxl_job_user`(`id`, `username`, `password`, `role`, `permission`) VALUES (1, 'admin', 'e10adc3949ba59abbe56e057f20f883e', 1, NULL);
INSERT INTO `xxl_job_lock` (`lock_name`) VALUES ('schedule_lock');

commit;
```
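Before starting the admin, its application.properties must point at the xxl_job database created above. A minimal sketch of the relevant entries (the values shown are placeholders for a local setup; adjust to your environment):

```properties
# xxl-job-admin/src/main/resources/application.properties (local example)
server.port=8080
server.servlet.context-path=/xxl-job-admin

spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=your_password
```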
Start the service and visit http://127.0.0.1:8080/xxl-job-admin.
Account: admin
Password: 123456
Initial information (admin console after login)
Add an executor
Add a job
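On the executor (application) side, xxl-job-core needs to know where the admin lives and under which AppName to register. A minimal sketch following the usual xxl-job sample configuration; the property names and the injected values are assumptions to adapt to your own project, and the appname must match the executor registered above.

```java
import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;   // e.g. http://127.0.0.1:8080/xxl-job-admin

    @Value("${xxl.job.executor.appname}")
    private String appname;          // must match the AppName of the executor added above

    @Value("${xxl.job.executor.port}")
    private int port;                // callback port the admin uses to reach this executor

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses);
        executor.setAppname(appname);
        executor.setPort(port);
        return executor;
    }
}
```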
Copy the sync logic over from its original path in the project:
```java
@Override
public void syncLiked() {
    Map<Object, Object> subjectLikedMap = redisUtil.getHashAndDelete(SUBJECT_LIKED_KEY);
    if (log.isInfoEnabled()) {
        log.info("syncLiked.subjectLikedMap:{}", JSON.toJSONString(subjectLikedMap));
    }
    if (MapUtils.isEmpty(subjectLikedMap)) {
        return;
    }
    // Batch-sync to the database
    List<SubjectLiked> subjectLikedList = new LinkedList<>();
    subjectLikedMap.forEach((key, val) -> {
        SubjectLiked subjectLiked = new SubjectLiked();
        String[] keyArr = key.toString().split(":");
        String subjectId = keyArr[0];
        String likedUser = keyArr[1];
        subjectLiked.setSubjectId(Long.valueOf(subjectId));
        subjectLiked.setLikeUserId(likedUser);
        subjectLiked.setStatus(Integer.valueOf(val.toString()));
        subjectLiked.setIsDeleted(IsDeletedFlagEnum.UN_DELETED.getCode());
        subjectLikedList.add(subjectLiked);
    });
    if (log.isInfoEnabled()) {
        log.info("syncLiked.subjectLikedList:{}", JSON.toJSONString(subjectLikedList));
    }
    subjectLikedService.batchInsertOrUpdate(subjectLikedList);
}
```
```xml
<insert id="batchInsertOrUpdate">
    INSERT INTO subject_liked (subject_id, like_user_id, status, created_by, created_time,
                               update_by, update_time, is_deleted)
    VALUES
    <foreach collection="entities" item="item" separator=",">
        (#{item.subjectId}, #{item.likeUserId}, #{item.status}, #{item.createdBy}, #{item.createdTime},
         #{item.updateBy}, #{item.updateTime}, #{item.isDeleted})
    </foreach>
    ON DUPLICATE KEY UPDATE
    status = VALUES(status),
    created_by = VALUES(created_by),
    created_time = VALUES(created_time),
    update_by = VALUES(update_by),
    update_time = VALUES(update_time),
    is_deleted = VALUES(is_deleted)
</insert>
```
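For reference, the mapper interface this XML binds to presumably looks like the sketch below; the interface name is a guess, and what matters is that @Param("entities") matches collection="entities". Note also that ON DUPLICATE KEY UPDATE only turns inserts into updates if subject_liked has a unique key, presumably covering (subject_id, like_user_id); otherwise every sync would insert new rows.

```java
import java.util.List;
import org.apache.ibatis.annotations.Param;

// Assumed mapper signature; "entities" must match the <foreach collection="entities"> above
public interface SubjectLikedDao {

    int batchInsertOrUpdate(@Param("entities") List<SubjectLiked> entities);
}
```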
The key step is redisUtil.getHashAndDelete, which drains the subject.liked hash into subjectLikedMap; each entry is then converted into a SubjectLiked record in subjectLikedList, and the list is handed to batchInsertOrUpdate, completing the transfer to the database.
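To have the admin actually trigger syncLiked(), the executor exposes it through a method annotated with @XxlJob; the annotation value must match the JobHandler configured when adding the job above. A minimal sketch, in which the job class name and the injected service name are my own assumptions, not the original project's.

```java
import com.xxl.job.core.handler.annotation.XxlJob;
import javax.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class SubjectLikedSyncJob {

    // Assumed name for the service that implements the syncLiked() shown above
    @Resource
    private SubjectLikedDomainService subjectLikedDomainService;

    // The handler name must match the JobHandler value configured in the xxl-job admin console
    @XxlJob("subjectLikedSyncHandler")
    public void subjectLikedSyncHandler() {
        log.info("subjectLikedSyncHandler start");
        subjectLikedDomainService.syncLiked();
        log.info("subjectLikedSyncHandler end");
    }
}
```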
One issue I ran into: we use a MyBatis interceptor to fill in default field values, and when it tries to fetch the login ID it throws an exception because of the thread context (the xxl-job worker thread has no logged-in user), so we need to wrap that lookup in a try/catch:
```java
// String loginId = LoginUtil.getLoginId();
String loginId;
try {
    loginId = Optional.ofNullable(LoginUtil.getLoginId()).orElse("账户匿名");
} catch (Exception e) {
    log.error("Failed to get login ID from LoginUtil", e);
    return invocation.proceed();
}
```
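For context, that try/catch sits inside the intercept method of the MyBatis interceptor that fills the audit fields. A heavily abbreviated sketch under that assumption (the interceptor class itself is not shown in the original, and its name here is made up):

```java
import java.util.Optional;
import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Signature;

@Slf4j
@Intercepts({@Signature(type = Executor.class, method = "update",
        args = {MappedStatement.class, Object.class})})
public class DefaultFieldInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        String loginId;
        try {
            // The xxl-job worker thread carries no login session, so this lookup can throw
            loginId = Optional.ofNullable(LoginUtil.getLoginId()).orElse("账户匿名");
        } catch (Exception e) {
            log.error("Failed to get login ID from LoginUtil", e);
            return invocation.proceed();
        }
        // ... fill createdBy / updateBy on the statement parameter with loginId ...
        return invocation.proceed();
    }
}
```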
With that, our scheduled sync is complete.
Deploying xxl-job with Docker:
```bash
docker search xxl-job
docker pull xuxueli/xxl-job-admin:2.4.0

docker run -d \
  -p 8088:8088 \
  -v /tool/xxl-job/logs:/data/applogs \
  -v /tool/xxl-job/application.properties:/xxl-job/xxl-job-admin/src/main/resources/application.properties \
  -e PARAMS="--server.port=8088 \
  --spring.datasource.url=jdbc:mysql://117.72.14.166:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&serverTimezone=Asia/Shanghai \
  --spring.datasource.username=root \
  --spring.datasource.password=Wing1Q2W#E" \
  --name xxl-job-admin \
  xuxueli/xxl-job-admin:2.4.0
```