
HA High-Availability Cluster Deployment

Source: 博客园 | Date: 2023-05-28 21:16:45

High-Availability ZooKeeper Cluster Deployment

ZooKeeper Installation and Deployment

Note: the JDK is required, but it was already installed in Chapter 4, so we go straight to installing ZooKeeper.

# Extract and install ZooKeeper
[root@master ~]# ls
anaconda-ks.cfg
apache-hive-2.0.0-bin.tar.gz
hadoop-2.7.1.tar.gz
jdk-8u152-linux-x64.tar.gz
mysql-community-client-5.7.18-1.el7.x86_64.rpm
mysql-community-common-5.7.18-1.el7.x86_64.rpm
mysql-community-devel-5.7.18-1.el7.x86_64.rpm
mysql-community-libs-5.7.18-1.el7.x86_64.rpm
mysql-community-server-5.7.18-1.el7.x86_64.rpm
mysql-connector-java-5.1.46.jar
sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
zookeeper-3.4.8.tar.gz
[root@master ~]# tar xf zookeeper-3.4.8.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
hadoop  hive  jdk  sqoop  zookeeper-3.4.8
[root@master src]# mv zookeeper-3.4.8 zookeeper
[root@master src]# ls
hadoop  hive  jdk  sqoop  zookeeper
Create the ZooKeeper data directories
[root@master src]# mkdir /usr/local/src/zookeeper/data
[root@master src]# mkdir /usr/local/src/zookeeper/logs
Configure the environment variables
[root@master src]# vi /etc/profile.d/zookeeper.sh
export ZK_HOME=/usr/local/src/zookeeper
export PATH=$PATH:$ZK_HOME/bin
Edit the zoo.cfg configuration file
[root@master src]# cd /usr/local/src/zookeeper/conf/
[root@master conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@master conf]# cp zoo_sample.cfg zoo.cfg
[root@master conf]# vi zoo.cfg
# Change:
dataDir=/usr/local/src/zookeeper/data
# Add:
dataLogDir=/usr/local/src/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
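For reference, this is roughly what the finished zoo.cfg looks like after the edits above, assuming the remaining defaults from zoo_sample.cfg (tickTime, initLimit, syncLimit, clientPort) are left untouched:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/src/zookeeper/data
dataLogDir=/usr/local/src/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888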
Create the myid file
[root@master conf]# cd ..
[root@master zookeeper]# cd data/
[root@master data]# echo "1" > myid
Distribute the ZooKeeper configuration to the cluster
# Send the environment variable file to slave1 and slave2
[root@master data]# scp -r /etc/profile.d/zookeeper.sh slave1:/etc/profile.d/
[root@master data]# scp -r /etc/profile.d/zookeeper.sh slave2:/etc/profile.d/
# Send the ZooKeeper directory to slave1 and slave2
[root@master ~]# scp -r /usr/local/src/zookeeper/ slave1:/usr/local/src/
[root@master ~]# scp -r /usr/local/src/zookeeper/ slave2:/usr/local/src/
Set the myid on each node
# slave1
[root@slave1 ~]# echo "2" > /usr/local/src/zookeeper/data/myid
# slave2
[root@slave2 ~]# echo "3" > /usr/local/src/zookeeper/data/myid
# Check all three nodes
[root@master ~]# cat /usr/local/src/zookeeper/data/myid
1
[root@slave1 ~]# cat /usr/local/src/zookeeper/data/myid
2
[root@slave2 ~]# cat /usr/local/src/zookeeper/data/myid
3
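Since passwordless ssh was already configured in Chapter 4, the same check can also be run from master in a single pass; a minimal sketch:

# Print each node's myid; expect 1, 2, 3 in order
for h in master slave1 slave2; do
    echo -n "$h: "
    ssh "$h" cat /usr/local/src/zookeeper/data/myid
done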
Change file ownership
[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave1 ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave2 ~]# chown -R hadoop.hadoop /usr/local/src/
Check the firewall and SELinux; disable them if they are still enabled
# Using master as an example; do the same on slave1 and slave2
[root@master ~]# systemctl disable --now firewalld
[root@master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; ve>
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@master ~]# vi /etc/selinux/config
SELINUX=disabled
Switch to the hadoop user and start ZooKeeper
[root@master ~]# su - hadoop
[root@slave1 ~]# su - hadoop
[root@slave2 ~]# su - hadoop
# Start ZooKeeper
[hadoop@master ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[hadoop@slave1 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[hadoop@slave2 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower
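As an extra sanity check, ZooKeeper's four-letter admin commands can be queried over the client port; a healthy server answers ruok with imok. A sketch, assuming nc (netcat) is installed:

# Query each quorum member on the client port (2181)
for h in master slave1 slave2; do
    echo -n "$h: "
    echo ruok | nc "$h" 2181
    echo
done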
Check the cluster processes
[hadoop@master ~]$ jps
1522 QuorumPeerMain
1579 Jps
[hadoop@slave1 ~]$ jps
1368 Jps
1309 QuorumPeerMain
[hadoop@slave2 ~]$ jps
1330 QuorumPeerMain
1387 Jps
Hadoop HA Cluster Deployment

Note: passwordless SSH login was already configured in Chapter 4, so we go straight to configuring HA.



A few additional steps for the SSH keys:

Send the public key created on master (the authorized_keys file) to slave1

[hadoop@master ~]$ scp ~/.ssh/authorized_keys root@slave1:~/.ssh/
root@slave1's password: 
authorized_keys                                  100%  567   672.2KB/s   00:00

Append slave1's own public key to its authorized_keys

[hadoop@slave1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Send the public key to slave2 and master

[hadoop@slave1 ~]$ ssh-copy-id slave2
[hadoop@slave1 ~]$ ssh-copy-id master
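To confirm the passwordless logins work before moving on, a quick loop can be run as the hadoop user on each node in turn; a minimal sketch:

# Should print every hostname without asking for a password
for h in master slave1 slave2; do
    ssh -o BatchMode=yes "$h" hostname
done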
Remove the Hadoop installation from Chapter 4
# Remove the environment variable file; do this on all three nodes
[root@master ~]# rm -rf /etc/profile.d/hadoop.sh
[root@slave1 ~]# rm -rf /etc/profile.d/hadoop.sh
[root@slave2 ~]# rm -rf /etc/profile.d/hadoop.sh
# Remove Hadoop
[root@master ~]# rm -rf /usr/local/src/hadoop/
[root@slave1 ~]# rm -rf /usr/local/src/hadoop/
[root@slave2 ~]# rm -rf /usr/local/src/hadoop/
Configure the Hadoop environment variables
[root@master ~]# vi /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/src/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME=/usr/local/src/jdk
export PATH=$PATH:$JAVA_HOME/bin
export ZK_HOME=/usr/local/src/zookeeper
export PATH=$PATH:$ZK_HOME/bin
Configure hadoop-env.sh
[root@master ~]# tar -xf hadoop-2.7.1.tar.gz -C /usr/local/src/
[root@master ~]# mv /usr/local/src/hadoop-2.7.1/ /usr/local/src/hadoop
[root@master ~]# cd /usr/local/src/hadoop/etc/hadoop/
[root@master hadoop]# vi hadoop-env.sh
# Add at the bottom:
export JAVA_HOME=/usr/local/src/jdk
Configure core-site.xml
[root@master hadoop]# vi core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/local/src/hadoop/tmp</value>
        </property>
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>30000</value>
                <description>ms</description>
        </property>
        <property>
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>
</configuration>
Configure hdfs-site.xml
[root@master hadoop]# vi hdfs-site.xml
<configuration>
        <property>
                <name>dfs.qjournal.start-segment.timeout.ms</name>
                <value>60000</value>
        </property>
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.mycluster</name>
                <value>master,slave1</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.master</name>
                <value>master:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.slave1</name>
                <value>slave1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.master</name>
                <value>master:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.slave1</name>
                <value>slave1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>
                        sshfence
                        shell(/bin/true)
                </value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.support.append</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/root/.ssh/id_rsa</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/nn</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/dn</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>/usr/local/src/hadoop/tmp/hdfs/jn</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>30000</value>
        </property>
        <property>
                <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
                <value>60000</value>
        </property>
</configuration>
Configure mapred-site.xml
[root@master ~]# cd /usr/local/src/hadoop/etc/hadoop/
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vi mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master:19888</value>
        </property>
</configuration>
Configure yarn-site.xml
[root@master hadoop]# vi yarn-site.xml
<configuration>
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yrc</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>master</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>slave1</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>master:2181,slave1:2181,slave2:2181</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>86400</value>
        </property>
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
</configuration>
Configure the slaves file
[root@master hadoop]# vi slaves
# Delete localhost and add the following:
master
slave1
slave2
Create the data directories
# /usr/local/src/hadoop/tmp is the common parent directory for the namenode, datanode, and journalnode data
[root@master hadoop]# mkdir -p /usr/local/src/hadoop/tmp/hdfs/{nn,dn,jn}
[root@master hadoop]# mkdir -p /usr/local/src/hadoop/tmp/logs
Distribute the files to the other nodes
# Distribute the environment variable file
[root@master hadoop]# scp -r /etc/profile.d/hadoop.sh slave1:/etc/profile.d/
hadoop.sh                                        100%  601   496.6KB/s   00:00
[root@master hadoop]# scp -r /etc/profile.d/hadoop.sh slave2:/etc/profile.d/
hadoop.sh                                        100%  601   314.7KB/s   00:00
# Distribute the Hadoop directory
[root@master hadoop]# scp -r /usr/local/src/hadoop/ slave1:/usr/local/src/
[root@master hadoop]# scp -r /usr/local/src/hadoop/ slave2:/usr/local/src/
Change the directory owner and group
[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave1 ~]# chown -R hadoop.hadoop /usr/local/src/
[root@slave2 ~]# chown -R hadoop.hadoop /usr/local/src/
Apply the environment variables
# The profile.d scripts are sourced automatically when switching to the hadoop user, but source them manually just in case
[root@master ~]# source /etc/profile.d/hadoop.sh
[root@slave1 ~]# source /etc/profile.d/hadoop.sh
[root@slave2 ~]# source /etc/profile.d/hadoop.sh
Starting the HA Cluster

Start the JournalNode daemons
# As the hadoop user
[hadoop@master ~]$ hadoop-daemons.sh start journalnode
master: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-master.out
slave1: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-slave1.out
slave2: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-journalnode-slave2.out
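Before formatting the NameNode it is worth confirming that every JournalNode is actually listening. A sketch, assuming curl is installed and the JournalNode web port is left at its default of 8480:

# The /jmx endpoint should return HTTP 200 on every node
for h in master slave1 slave2; do
    curl -s -o /dev/null -w "$h: %{http_code}\n" "http://$h:8480/jmx"
done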
Format the NameNode
[hadoop@master ~]$ hdfs namenode -format
............
23/05/28 13:58:27 INFO namenode.FSImage: Allocated new BlockPoolId: BP-793703415-192.168.88.10-1685253507647
23/05/28 13:58:27 INFO common.Storage: Storage directory /usr/local/src/hadoop/tmp/hdfs/nn has been successfully formatted.
23/05/28 13:58:28 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
23/05/28 13:58:28 INFO util.ExitUtil: Exiting with status 0
23/05/28 13:58:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.88.10
************************************************************/
Register the ZNode in ZooKeeper
# Start ZooKeeper first, otherwise this step will fail
[hadoop@master ~]$ zkServer.sh start
[hadoop@slave1 ~]$ zkServer.sh start
[hadoop@slave2 ~]$ zkServer.sh start
[hadoop@master ~]$ hdfs zkfc -formatZK
......
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Opening socket connection to server slave2/192.168.88.30:2181. Will not attempt to authenticate using SASL (unknown error)
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Socket connection established to slave2/192.168.88.30:2181, initiating session
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: Session establishment complete on server slave2/192.168.88.30:2181, sessionid = 0x38860f220b90000, negotiated timeout = 30000
23/05/28 14:01:08 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
23/05/28 14:01:08 INFO ha.ActiveStandbyElector: Session connected.
23/05/28 14:01:08 INFO zookeeper.ZooKeeper: Session: 0x38860f220b90000 closed
23/05/28 14:01:08 INFO zookeeper.ClientCnxn: EventThread shut down
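To double-check that the znode really exists, the ZooKeeper CLI can be pointed at any quorum member; a sketch (zkCli.sh is already on the PATH via ZK_HOME):

# The listing should show "mycluster" under /hadoop-ha
zkCli.sh -server master:2181 ls /hadoop-ha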
Start HDFS and YARN
[hadoop@master ~]$ start-all.sh 
Synchronize the master's data
# Copy the namenode metadata to the other nodes (run on master)
[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave1:/usr/local/src/hadoop/tmp/hdfs/nn/
VERSION                                          100%  204   189.8KB/s   00:00
seen_txid                                        100%    2     1.3KB/s   00:00
fsimage_0000000000000000000.md5                  100%   62    38.1KB/s   00:00
fsimage_0000000000000000000                      100%  353   378.0KB/s   00:00
edits_inprogress_0000000000000000001             100% 1024KB   5.0MB/s   00:00
in_use.lock                                      100%   11     6.4KB/s   00:00
[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave2:/usr/local/src/hadoop/tmp/hdfs/nn/
VERSION                                          100%  204   294.1KB/s   00:00
seen_txid                                        100%    2     2.2KB/s   00:00
fsimage_0000000000000000000.md5                  100%   62    65.8KB/s   00:00
fsimage_0000000000000000000                      100%  353   554.6KB/s   00:00
edits_inprogress_0000000000000000001             100% 1024KB   6.7MB/s   00:00
in_use.lock                                      100%   11     8.9KB/s   00:00
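As an aside, the usual alternative to copying the metadata by hand is to let the standby NameNode bootstrap itself from the active one; a sketch of that approach, run on slave1 while the JournalNodes and the NameNode on master are up:

# Pull the current fsimage and edits from the active NameNode
hdfs namenode -bootstrapStandby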
Start the ResourceManager and NameNode processes on slave1
[hadoop@slave1 ~]$ yarn-daemons.sh start resourcemanager
[hadoop@slave1 ~]$ hadoop-daemon.sh start namenode
[hadoop@slave1 ~]$ jps
1489 JournalNode
1841 DFSZKFailoverController
1922 NodeManager
2658 NameNode
2738 Jps
1702 DataNode
2441 ResourceManager
1551 QuorumPeerMain
Start the web proxy server and the MapReduce job history server
[hadoop@master ~]$ yarn-daemon.sh start proxyserver
starting proxyserver, logging to /usr/local/src/hadoop/logs/yarn-hadoop-proxyserver-master.out
[hadoop@master ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/src/hadoop/logs/mapred-hadoop-historyserver-master.out
Check ports and processes
[hadoop@master ~]$ jps
3297 JobHistoryServer
2260 DataNode
2564 DFSZKFailoverController
2788 NodeManager
2678 ResourceManager
2122 NameNode
3371 Jps
1727 JournalNode
1919 QuorumPeerMain
[hadoop@slave1 ~]$ jps
1489 JournalNode
1841 DFSZKFailoverController
1922 NodeManager
2658 NameNode
2738 Jps
1702 DataNode
2441 ResourceManager
1551 QuorumPeerMain
[hadoop@slave2 ~]$ jps
1792 NodeManager
1577 QuorumPeerMain
2282 Jps
1515 JournalNode
1647 DataNode
Check the web UIs

master:50070

slave1:50070

master:8088
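If a browser is not handy, reachability can also be checked from the shell; a sketch, assuming curl is available:

# Follow redirects and print the final HTTP status; expect 200 from each UI
for u in master:50070 slave1:50070 master:8088; do
    curl -s -L -o /dev/null -w "$u: %{http_code}\n" "http://$u/"
done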

Testing HA

Create a test file
[hadoop@master ~]$ vi rainmom.txt
Hello World
Hello Hadoop
Create a directory in HDFS
[hadoop@master ~]$ hadoop fs -mkdir /input
Upload rainmom.txt to /input
[hadoop@master ~]$ hadoop fs -put ~/rainmom.txt /input
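Optionally confirm the upload before running the job:

# rainmom.txt should show up under /input
hadoop fs -ls /input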
Change into the directory containing the example jar and test MapReduce
[hadoop@master ~]$ cd /usr/local/src/hadoop/share/hadoop/mapreduce/
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input/rainmom.txt /output
.....
23/05/28 14:35:37 INFO mapreduce.Job: Running job: job_1685253795384_0001
23/05/28 14:35:48 INFO mapreduce.Job: Job job_1685253795384_0001 running in uber mode : false
23/05/28 14:35:48 INFO mapreduce.Job:  map 0% reduce 0%
23/05/28 14:35:57 INFO mapreduce.Job:  map 100% reduce 0%
23/05/28 14:36:09 INFO mapreduce.Job:  map 100% reduce 100%
23/05/28 14:36:10 INFO mapreduce.Job: Job job_1685253795384_0001 completed successfully
....
Check the output in HDFS
[hadoop@master ~]$ hadoop fs -ls -R /output
-rw-r--r--   2 hadoop supergroup          0 2023-05-28 14:36 /output/_SUCCESS
-rw-r--r--   2 hadoop supergroup         25 2023-05-28 14:36 /output/part-r-00000
View the word-count result
[hadoop@master ~]$ hadoop fs -cat /output/part-r-00000
Hadoop	1
Hello	2
World	1
Verifying High Availability

Automatic service state switchover
# Format: hdfs haadmin -failover --forcefence --forceactive <currently-active-namenode> <standby-namenode>
[hadoop@master ~]$ hdfs haadmin -failover --forcefence --forceactive slave1 master
# Note: running this command prints "forcefence and forceactive flags not supported with auto-failover enabled."
# With automatic failover configured (dfs.ha.automatic-failover.enabled=true), a manual failover cannot be triggered,
# so this attempt fails. Set the parameter to false (no process restart needed) and rerun the command if you want to
# switch manually.
# dfs.ha.automatic-failover.enabled is set in hdfs-site.xml (or core-site.xml)
[hadoop@master ~]$ vi /usr/local/src/hadoop/etc/hadoop/hdfs-site.xml
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
# Check the states
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
standby
[hadoop@master ~]$ hdfs haadmin -getServiceState master
active
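A small loop makes it easy to watch both NameNode states at once during these tests; a sketch, run as the hadoop user:

# Print the HA state of each NameNode listed in dfs.ha.namenodes.mycluster
for nn in master slave1; do
    echo -n "$nn: "
    hdfs haadmin -getServiceState "$nn"
done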
Manually switching the service state
# Stop and then restart the namenode on master
[hadoop@master ~]$ hadoop-daemon.sh stop namenode
stopping namenode
# Check the states
[hadoop@master ~]$ hdfs haadmin -getServiceState master
23/05/28 14:53:55 INFO ipc.Client: Retrying connect to server: master/192.168.88.10:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From master/192.168.88.10 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
active
# Restart the namenode
[hadoop@master ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out
# Check the states again
[hadoop@master ~]$ hdfs haadmin -getServiceState slave1
active
[hadoop@master ~]$ hdfs haadmin -getServiceState master
standby
Check the web UIs

master:50070

slave1:50070
