
KafkaListener annotation

Posted on 2019-12-02 | Edited on 2019-12-15 | In db

Listener container factory

Define a config loader function first

private Map<String, Object> consumerConfigs(String groupId) {
    Map<String, Object> consumerConfig = new HashMap<>();
    initKafkaClientConfiguration(consumerConfig);
    consumerConfig.put("group.id", groupId);
    consumerConfig.put(..., ...);
    ...
    return consumerConfig;
}
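The factory beans below also call a consumerFactory(groupId) helper that this post never shows; a minimal sketch of it, assuming spring-kafka's DefaultKafkaConsumerFactory:

// Hypothetical helper assumed by the factory beans below: it builds a
// ConsumerFactory from the per-group config map defined above.
private ConsumerFactory<Integer, String> consumerFactory(String groupId) {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(groupId));
}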

Create factory beans

@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactoryOne() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(10);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConsumerFactory(consumerFactory("group-one"));
    return factory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactoryTwo() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(10);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConsumerFactory(consumerFactory("group-two"));
    return factory;
}

Point to the factory in @KafkaListener

@KafkaListener(topics = "Topic-1" , containerFactory = "kafkaListenerContainerFactoryOne")
...

@KafkaListener(topics = "Topic-2" , containerFactory = "kafkaListenerContainerFactoryTwo")
...

Get header and body by annotation

@KafkaListener(id = "anno", topics = "topic-3")
public void annoListener(@Payload String data,
                         @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer key,
                         @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                         @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                         @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts) {
    log.info("topic.quick.anno receive : \n" +
            "data : " + data + "\n" +
            "key : " + key + "\n" +
            "partitionId : " + partition + "\n" +
            "topic : " + topic + "\n" +
            "timestamp : " + ts + "\n"
    );
}

https://blog.csdn.net/my_momo_csdn/article/details/89366205
https://www.jianshu.com/p/a64defb44a23

Introducing Serverless

Posted on 2019-11-07 | Edited on 2019-11-30 | In blog

Chapter 1: Introducing Serverless

In this chapter we start with a history lesson, to understand what led us to Serverless. With that context, we describe what Serverless is. Finally, we wrap up by summarizing why Serverless is both part of the natural growth of the cloud and a jolt to how we approach application delivery.

Setting the Stage

To place a technology like Serverless in its proper context, we must first outline the steps along its evolutionary path.

The Birth of the Cloud

Let's travel back to 2006. No one had an iPhone yet, Ruby on Rails was a hot new programming environment, and Twitter was just launching. More pertinent to this report, however, is that many people were hosting their server-side applications on racked machines in data centers that they owned themselves.

In August 2006 something happened that fundamentally changed this model. Amazon's new IT division, Amazon Web Services (AWS), announced the launch of the Elastic Compute Cloud (EC2).

EC2 was one of the first of many Infrastructure as a Service (IaaS) products. IaaS allows companies to rent computing capacity, that is, hosts to run their Internet-facing server applications, rather than buying their own machines. It also allows them to provision hosts just in time, with the delay from requesting a machine to its availability being on the order of minutes.

EC2's five key advantages are:

  • Reduced labor cost
    Before Infrastructure as a Service, companies needed to hire specific technical operations staff who would work in data centers and manage their physical servers. This meant everything from power and networking, to racking and installing, to fixing physical problems with machines such as bad memory, to setting up the operating system (OS). With IaaS all of this goes away, becoming instead the responsibility of the IaaS service provider (AWS, in the case of EC2).

  • Reduced risk
    When managing their own physical servers, companies are exposed to problems caused by unplanned incidents such as failing hardware. Since hardware problems tend to be infrequent and can take a long time to resolve, they introduce downtime of highly variable length. With IaaS, the customer still has some work to do when hardware fails, but no longer needs to know how to fix the hardware itself. Instead, the customer simply requests a new machine instance, available within minutes, and reinstalls the application, sidestepping the problem.

  • Reduced infrastructure cost
    In many cases the cost of a connected EC2 instance is lower than that of running your own hardware, once you account for power, networking, and so on. This is especially true when you only want to run a host for a few days or weeks rather than for months or years at a stretch. Similarly, renting hosts by the hour rather than buying them outright leads to different accounting treatment: an EC2 machine is an operating expense (Opex) rather than the capital expense (Capex) of a physical machine, which usually allows more favorable accounting flexibility.

  • Scaling
    Infrastructure costs drop dramatically when you factor in the scaling benefits that IaaS brings. With IaaS, companies have far more flexibility in scaling the number and type of servers they run. There is no longer a need to buy ten high-end servers up front because you think you might need them in a few months' time. Instead, you can start with one or two low-powered, inexpensive instances and then scale the number and type of instances up and down over time, without any negative cost impact.

  • Lead time
    In the bad old days of self-hosted servers, it could take months to procure and provision a server for a new application. If you came up with an idea you wanted to try within a few weeks, too bad. With IaaS, lead time shrinks from months to minutes. This has ushered in the era of rapid product experimentation, as encouraged by the ideas of lean startup.

Infrastructure Outsourcing

Using IaaS is a technique we can define as infrastructure outsourcing. When we develop and operate software, we can break down the requirements of our work in two ways: those that are specific to our needs, and those that are the same for other teams and organizations working in similar ways. We can define this second group of requirements as infrastructure, ranging from physical commodities, such as the electricity that powers our machines, all the way up to common application functions, such as user authentication.

Infrastructure outsourcing can typically be provided by a service provider or vendor. For example, electricity comes from a power supplier and networking from an Internet service provider (ISP). A vendor is able to provide such a service profitably through two kinds of strategy, economic and technical, which we describe next.

Economy of Scale

Almost every form of infrastructure outsourcing is at least partly supported by the idea of economy of scale: doing the same thing many times in aggregate is cheaper than the sum of doing those things independently, because efficiencies can be exploited.

For instance, AWS can buy servers of a given specification at a lower price than a small company can, because AWS buys servers by the thousand rather than individually. Likewise, AWS's per-server hardware support cost is far lower than that of a company that owns a handful of machines.

Technology Improvement

Infrastructure outsourcing also often comes about partly through technical innovation. In the case of EC2, that change was hardware virtualization.

Before IaaS appeared, some IT vendors had already begun letting companies rent physical servers as hosts, typically by the month. While some companies used this service, the alternative of renting hosts by the hour was far more compelling. That only became truly feasible, however, once physical servers could be subdivided into many small virtual machines (VMs) that could be started and shut down quickly. Once that was possible, IaaS was born.

Common Benefits

Infrastructure outsourcing typically echoes the five benefits of IaaS:

  • Reduced labor cost: fewer people, and less time, needed to perform infrastructure work

  • Reduced risk: fewer subjects requiring in-house expertise, and more real-time operational support capability

  • Reduced resource cost: lower cost for the same capability

  • Increased flexibility of scaling: more resources, and different types of similar resources, can be accessed and then disposed of, without significant penalty or waste

  • Shorter lead time: reduced time to market from concept to production availability

Of course, infrastructure outsourcing also has its drawbacks and limitations, which we cover later in this report.

The Cloud Grows Up

IaaS, together with storage such as the AWS Simple Storage Service (S3), was one of the first key elements of the cloud. AWS was an early mover and is still the leading cloud provider, but there are many other vendors, from the very large (Microsoft and Google) to the not-yet-as-large (such as DigitalOcean).

When we talk about "the cloud," we usually mean the public cloud: a collection of infrastructure services provided by a vendor, separate from your own company and hosted in the vendor's own data centers. However, we have also seen the related growth of cloud products that companies can use in their own data centers, using tools such as OpenStack. Such self-hosted systems are often referred to as private clouds, and the act of using your own hardware and physical space is called on-premises (or just on-prem).

The next evolution of the public cloud was Platform as a Service (PaaS). Heroku is one of the most popular PaaS providers. PaaS layers on top of IaaS, adding the operating system (OS) to the infrastructure being outsourced. With PaaS you deploy only your application, and the platform takes care of OS installation, patch upgrades, system-level monitoring, service discovery, and the like.

PaaS also has a popular self-hosted open source variant in Cloud Foundry. Since PaaS sits on top of an existing virtualization solution, you can host a "private PaaS" on-premises or on a lower-level IaaS public cloud service. Using both public and private cloud systems together is often referred to as hybrid cloud; being able to run one PaaS across both environments can be a useful technique.

An alternative to using a PaaS on top of virtual machines is to use containers. Docker has become increasingly popular over the past few years because it makes an application's system requirements much clearer by separating them from the operating system itself. There are cloud-based services that host and manage/orchestrate containers on a team's behalf, often referred to as Containers as a Service (CaaS). A public cloud example is Google's Container Engine. Some self-hosted CaaS options are Kubernetes and Mesos, which you can run privately or, just like a PaaS, on top of public IaaS services.

Like IaaS, vendor-provided PaaS and CaaS are further forms of infrastructure outsourcing. They mainly differ from IaaS in raising the level of abstraction further, letting us hand off more of our technology to others. As such, the benefits of PaaS and CaaS are the same five we listed earlier.

More specifically, we can group all three of these (IaaS, PaaS, CaaS) as "compute as a service"; in other words, different types of generic environments in which we can run our own specialized software. We'll use this term again soon.

Enter Serverless, Stage Right

And so we arrive at the present day, a little over a decade after the birth of the cloud. The main reason for this elaboration is that Serverless, the subject of this report, is most simply described as the next evolution of cloud computing, and another form of infrastructure outsourcing. It has the same five general benefits we've already seen, and it is able to provide them through economy of scale and technological advances. But beyond that, what is Serverless?

Defining Serverless

As soon as we look at Serverless in any detail we hit the first point of confusion: Serverless actually covers a range of techniques and technologies. We group these ideas into two areas: Backend as a Service (BaaS) and Functions as a Service (FaaS).

Backend as a Service

BaaS is all about replacing server-side components that we write and/or manage ourselves with off-the-shelf services. It is conceptually closer to Software as a Service (SaaS) than it is to things like virtual instances and containers. Where SaaS is typically about outsourcing business processes (think HR or sales tools or, on the technical side, products like GitHub), with BaaS we break up our applications into smaller pieces and implement some of those pieces entirely with external products.

BaaS services are domain-generic remote components (i.e., not in-process libraries) that we can integrate into our products, with an API being the typical integration paradigm.

BaaS has become especially popular with teams developing mobile apps or single-page web apps. Many such teams are able to rely significantly on third-party services to perform tasks they would otherwise have needed to do themselves. Let's look at a couple of examples.

First, we have services like Google Firebase (and, before it was shut down, Parse). Firebase is a database product fully managed by a vendor (Google, in this case) that can be used directly from a mobile or web application without the need for our own intermediate application server. This represents one aspect of BaaS: services that manage data components on our behalf.

BaaS services also allow us to rely on application logic that someone else has implemented. A good example here is authentication: many applications implement their own code to perform sign-up, login, password management, and so on, yet this code is often very similar across applications. Such repetition across teams and businesses is ripe for extraction into an external service, and that's precisely the goal of products like Auth0 and Amazon's Cognito. Both products allow mobile and web applications to have fully featured authentication and user management without a development team having to write or manage any of the code to implement those features.

The term Backend as a Service became particularly popular with the rise of mobile application development; in fact, the term is sometimes specialized to Mobile Backend as a Service (MBaaS). However, the key idea of using fully externally managed products in our application development is not unique to mobile development, or even frontend development in general. For instance, we might stop managing our own MySQL database server on EC2 machines and instead use Amazon's RDS service, or we might replace our self-managed Kafka message bus installation with Kinesis. Other data infrastructure services include filesystems/object stores and data warehouses, while more logic-oriented examples include speech analysis and the authentication products we mentioned earlier, which can also be used from server-side components. Many of these services can be considered Serverless, but not all of them; we'll define what we think differentiates a Serverless service in Chapter 5.

Functions as a Service / Serverless Compute

The other half of Serverless is Functions as a Service (FaaS). FaaS is another form of compute as a service, a generic environment in which we can run our software, as described earlier. In fact, some people (notably AWS) refer to FaaS as Serverless Compute. AWS Lambda is the most widely used FaaS implementation at present.

FaaS is a new way of building and deploying server-side software, oriented around deploying individual functions or operations. FaaS is where much of the buzz about Serverless comes from; in fact, many people think Serverless is FaaS, but they're missing the complete picture.

When we traditionally deploy server-side software, we start with a host instance, typically a virtual machine (VM) instance or a container (see Figure 1-1). We then deploy our application within the host. If the host is a VM or a container, our application is an operating system process. Usually our application contains code for several different but related operations; for example, a web service may allow both the retrieval and the updating of resources.

Figure 1-1. Traditional server-side software deployment

FaaS changes this model of deployment (see Figure 1-2). We strip away both the host instance and the application process from our model. Instead, we focus on just the individual operations, or functions, that express our application's logic. We upload those functions individually to a vendor-provided FaaS platform.

Figure 1-2. FaaS software deployment

The functions are not constantly active in a server process, however, sitting idle until they need to be run as they would be in a traditional system (Figure 1-3). Instead, the FaaS platform is configured to listen for a specific event for each operation. When that event occurs, the vendor platform instantiates the function and then invokes it with the triggering event.

Figure 1-3. FaaS function lifecycle

Once the function has finished executing, the FaaS platform is free to tear it down. Alternatively, as an optimization, it may keep the function around for a little while, until there's another event to be processed.

FaaS is inherently an event-driven approach. Beyond providing a platform for hosting and executing code, a FaaS vendor also integrates with various synchronous and asynchronous event sources. An example of a synchronous source is an HTTP API gateway. Examples of asynchronous sources are hosted message buses, object stores, and cron-like scheduled events.
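To make this deployment model concrete, here is a minimal sketch of what a single FaaS function looks like in code, written against the AWS Lambda Java runtime (the RequestHandler interface from aws-lambda-java-core); the event shape and handler logic are illustrative only:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// One operation, deployed on its own: the platform instantiates this class and
// calls handleRequest once per triggering event (e.g., an object-store upload
// or an API gateway request). No host or server process is managed here.
public class ExampleHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        context.getLogger().log("received event: " + event);
        return "ok";
    }
}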

AWS Lambda was launched in the fall of 2014 and has grown in both maturity and usage since then. While some uses of Lambda are very infrequent, executing just a few times a day, some companies use Lambda to process billions of events per day. At the time of writing, Lambda is integrated with more than 15 different types of event source, enabling it to be used for a wide variety of applications.

Besides AWS Lambda, there are several other commercial FaaS offerings from Microsoft, IBM, Google, and smaller providers such as Auth0. Just as with the various other compute-as-a-service platforms we've discussed (IaaS, PaaS, CaaS), there are also open source projects that you can run on your own hardware or on a public cloud. This private FaaS space is busy at the moment, with no clear leader, and many of the options are in their early days at the time of writing. Examples include Galactic Fog, IronFunctions, Fission (which uses Kubernetes), and IBM's own OpenWhisk.

The Common Theme of Serverless

Superficially, BaaS and FaaS are quite different: the first is about entirely outsourcing individual elements of your application, while the second is a new hosting environment for running your own code. So why do we group them together under Serverless?

The key is that neither requires you to manage your own server hosts or server processes. With a fully Serverless application, you no longer think about any part of your architecture as a resource running on a host. All of your logic, whether you coded it yourself or integrated it from a third-party service, runs within a completely elastic operating environment. Your state is also stored in a similarly elastic form. Serverless doesn't mean the servers have gone away; it means you no longer need to worry about them.

Because of this key theme, BaaS and FaaS share some common benefits and limitations, which we look at in Chapters 3 and 4. There are other differentiators of a Serverless approach, also common to FaaS and BaaS, which we cover in Chapter 5.

An Evolution, with a Jolt

We mentioned in the preface that Serverless is an evolution. The reason is that over the past decade we have moved more and more of what is common about our applications and environments to commodity services that we outsource. We see the same trend with Serverless: the outsourcing of host management, operating system management, resource allocation, scaling, and even entire components of application logic, treating all of these as commodities. Economically and operationally there is a natural progression here.

However, there is a big change with Serverless when it comes to application architecture. Most cloud services, until now, have not fundamentally changed how we design applications. For instance, when using a tool like Docker, we put a thinner "box" around our application, but it's still a box, and our logical architecture does not change significantly. When hosting our own MySQL instance in the cloud, we still need to consider how powerful a virtual machine we need to handle the load, and we still need to think about failover.

That changes with Serverless, and not gradually, but as a jolt. Serverless FaaS drives a very different type of application architecture, through a fundamentally event-driven model, a far more granular form of deployment, and the need to keep state outside of the FaaS components themselves (we'll talk more about this later). Serverless BaaS frees us from writing entire logical components, but requires us to integrate our applications with the specific interfaces and models that a vendor provides.

So what would an application look like if it were fully Serverless? That's what we'll explore next, in Chapter 2.

https://www.oreilly.com/library/view/what-is-serverless/9781491984178/ch01.html

cmake tutorial

Posted on 2019-10-19 | Edited on 2020-02-27 | In c++

Demo

.
|-- CMakeLists.txt
`-- main.c

main.c

#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

add_executable(main main.c)

[root@8bdf5f3b23af demo]# cmake .
-- The C compiler identification is GNU 8.2.1
-- The CXX compiler identification is GNU 8.2.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
-- Build files have been written to: /root/labc/demo

The command generates a Makefile and some auto-generated files.

.
|-- CMakeCache.txt
|-- CMakeFiles
|-- CMakeLists.txt
|-- Makefile
|-- cmake_install.cmake
`-- main.c

[root@8bdf5f3b23af demo]# make
Scanning dependencies of target main
[ 50%] Building C object CMakeFiles/main.dir/main.c.o
[100%] Linking C executable main
[100%] Built target main

make and execute

[root@8bdf5f3b23af demo]# ./main
Hello World

Multiple source files under one directory

.
|-- CMakeLists.txt
|-- main.c
|-- testFunc.c
`-- testFunc.h

testFunc.c

/*
 * testFunc.c
 */

#include <stdio.h>
#include "testFunc.h"

void func(int data)
{
    printf("data is %d\n", data);
}

testFunc.h

/*
 * testFunc.h
 */

#ifndef _TEST_FUNC_H_
#define _TEST_FUNC_H_

void func(int data);

#endif

main.c

#include <stdio.h>
#include "testFunc.h"

int main(void)
{
    func(100);
    return 0;
}

CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

add_executable(main main.c testFunc.c)

add testFunc.c to the add_executable parameters

make and execute

[root@8bdf5f3b23af multsource]# ./main
data is 100

If there are many source files, listing them all in add_executable is troublesome.
aux_source_directory collects the files under a directory into a source list.

add another function

.
|-- CMakeLists.txt
|-- main
|-- main.c
|-- testFunc.c
|-- testFunc.h
|-- testFunc1.c
`-- testFunc1.h

modify CMakeLists

cmake_minimum_required (VERSION 2.8)

project (demo)

aux_source_directory(. SRC_LIST)

add_executable(main ${SRC_LIST})

make and execute

[root@8bdf5f3b23af multsource1]# ./main
data is 100
data is 200

Multiple source files under multiple directories

.
|-- CMakeLists.txt
|-- main
|-- main.c
|-- test_func
|   |-- testFunc.c
|   `-- testFunc.h
`-- test_func1
    |-- testFunc1.c
    `-- testFunc1.h

group the functions into two directories, test_func and test_func1

modify CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

include_directories (test_func test_func1)

aux_source_directory (test_func SRC_LIST)
aux_source_directory (test_func1 SRC_LIST1)

add_executable (main main.c ${SRC_LIST} ${SRC_LIST1})

The result is the same as above.

Formal structure

.
|-- CMakeLists.txt
|-- bin
|-- build
|-- include
|   |-- testFunc.h
|   `-- testFunc1.h
|-- main
`-- src
    |-- CMakeLists.txt
    |-- main.c
    |-- testFunc.c
    `-- testFunc1.c
  1. put source files in src
  2. put header files in include
  3. build stores intermediate build files
  4. bin stores executable files

outside CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

add_subdirectory (src)

inside CMakeLists.txt

aux_source_directory (. SRC_LIST)

include_directories (../include)

add_executable (main ${SRC_LIST})

set (EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

  • EXECUTABLE_OUTPUT_PATH : executable file path
  • PROJECT_SOURCE_DIR : project root

The result is the same as above.

Dynamic and static libraries

.
|-- CMakeLists.txt
|-- build
|-- lib
`-- lib_testFunc
    |-- CMakeLists.txt
    |-- testFunc.c
    `-- testFunc.h

outside CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

add_subdirectory (lib_testFunc)

inside CMakeLists.txt

aux_source_directory (. SRC_LIST)

add_library (testFunc_shared SHARED ${SRC_LIST})
add_library (testFunc_static STATIC ${SRC_LIST})

set_target_properties (testFunc_shared PROPERTIES OUTPUT_NAME "testFunc")
set_target_properties (testFunc_static PROPERTIES OUTPUT_NAME "testFunc")

set (LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

  • add_library: generates a dynamic or static library (arguments: 1. library name; 2. SHARED or STATIC, static by default; 3. source file paths)
  • set_target_properties: sets the output name; can set other options such as the lib version
  • LIBRARY_OUTPUT_PATH: default output path for library files

in build, run cmake .. and make

.
|-- CMakeLists.txt
|-- build
|   |-- CMakeCache.txt
|   |-- CMakeFiles
|   |-- Makefile
|   |-- cmake_install.cmake
|   `-- lib_testFunc
|-- lib
|   |-- libtestFunc.a
|   `-- libtestFunc.so
`-- lib_testFunc
    |-- CMakeLists.txt
    |-- testFunc.c
    `-- testFunc.h

Link against libraries

.
|-- CMakeLists.txt
|-- bin
|-- build
|-- lib
|   |-- libtestFunc.a
|   `-- libtestFunc.so
|-- lib_testFunc
|   |-- CMakeLists.txt
|   |-- testFunc.c
|   `-- testFunc.h
`-- src
    |-- CMakeLists.txt
    `-- main.c

outside CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

add_subdirectory (lib_testFunc)

add_subdirectory (src)

inside CMakeLists.txt

aux_source_directory (. SRC_LIST)

# find testFunc.h
include_directories (../lib_testFunc)

link_directories (${PROJECT_SOURCE_DIR}/lib)

add_executable (main ${SRC_LIST})

target_link_libraries (main testFunc)

set (EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

  • link_directories : adds a non-standard library search path
  • target_link_libraries : links the target with the library file

Compile options

.
|-- CMakeLists.txt
|-- bin
|-- build
`-- main.cpp

main.cpp

#include <iostream>

int main(void)
{
    auto data = 100;
    std::cout << "data: " << data << "\n";
    return 0;
}

CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

project (demo)

set (EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

add_compile_options(-std=c++11 -Wall)

add_executable(main main.cpp)

Compile control

.
|-- CMakeLists.txt
|-- bin
|-- build
`-- src
    |-- CMakeLists.txt
    |-- main1.c
    `-- main2.c

outside CMakeLists.txt

cmake_minimum_required(VERSION 2.8)

project(demo)

option(MYDEBUG "enable debug compilation" OFF)

set (EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)

add_subdirectory(src)

add an option MYDEBUG

inside CMakeLists.txt

cmake_minimum_required (VERSION 2.8)

add_executable(main1 main1.c)

if (MYDEBUG)
    add_executable(main2 main2.c)
else()
    message(STATUS "Currently is not in debug mode")
endif()

use MYDEBUG to decide whether or not to compile main2
cmake .. -DMYDEBUG=ON && make

.
|-- CMakeLists.txt
|-- bin
|   |-- main1
|   `-- main2
|-- build
|   |-- CMakeCache.txt
|   |-- CMakeFiles
|   |-- Makefile
|   |-- cmake_install.cmake
|   `-- src
`-- src
    |-- CMakeLists.txt
    |-- main1.c
    `-- main2.c

https://blog.csdn.net/whahu1989/article/details/82078563

delete docker image failed

Posted on 2019-10-16 | Edited on 2019-11-30 | In linux

problem

docker images -a

[root@CN-BJI-D-de9f66 ~]# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 72dc10d3deba 22 hours ago 230MB
tdtest latest e3ad759bb993 22 hours ago 230MB
<none> <none> 820bc1802af7 22 hours ago 230MB
...
<none> <none> ac8cea40eaa0 22 hours ago 223MB
centos latest 0f3e07c0138f 2 weeks ago 220MB
redis latest 01a52b3b5cd1 2 weeks ago 98.2MB
ubuntu latest 2ca708c1c9cc 3 weeks ago 64.2MB
redis alpine ed7d2ff5a623 8 weeks ago 29.3MB

When running docker rmi 820bc1802af7, deleting the <none>:<none> image fails.

Error response from daemon: conflict: unable to delete 820bc1802af7 (cannot be forced) - image has dependent child images

This is caused by other images depending on the image 820bc1802af7.
Use docker image inspect to find the current image's child images and their repository/tag names.

[root@CN-BJI-D-de9f66 ~]# docker image inspect --format='{{.RepoTags}} {{.Id}} {{.Parent}}' $(docker image ls -q --filter since=820bc1802af7)
[tdtest:latest] sha256:e3ad759bb99351a4951df11113e0ad3914a26d55e99e5243e13ebac19373efae sha256:8abd1595d4dfc922b81825940c0f6d73034eeb10b3e2b12a78b3d0e33d78588e

mechanism

kind one

<none>:<none> images related to a deleted image will be deleted along with it.
This kind of <none>:<none> is an intermediate image and doesn't occupy extra space.

kind two

Build a docker image with docker build -t hello-world ./

FROM centos:latest
RUN echo 'hello world'

When centos publishes a new release and the hello-world image is built again, it will depend on the latest centos.
The hello-world image that depends on the older centos becomes a <none>:<none> dangling image.
This kind of <none>:<none> does occupy space; it was originally a tagged image.

All docker files are stored under /var/lib/docker/graph by default, which is why docker is called a graph layer database.
Use docker rmi $(docker images -f "dangling=true" -q) to delete dangling image layers.

other commands

# stop all exited containers
docker ps -a | grep "Exited" | awk '{print $1 }'|xargs docker stop

# delete all exited containers
docker ps -a | grep "Exited" | awk '{print $1 }'|xargs docker rm

# delete all <none> images
docker images|grep none|awk '{print $3 }'|xargs docker rmi

http://www.ibloger.net/article/3217.html
https://segmentfault.com/a/1190000011153919
https://blog.csdn.net/sin_geek/article/details/86736417

change repository in centos 8

Posted on 2019-10-16 | Edited on 2019-11-30 | In linux

CentOS-Base.repo

modify /etc/yum.repos.d/CentOS-Base.repo

[BaseOS]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
baseurl=https://mirrors.aliyun.com/centos/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

CentOS-AppStream.repo

modify /etc/yum.repos.d/CentOS-AppStream.repo

[AppStream]
name=CentOS-$releasever - AppStream
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
baseurl=https://mirrors.aliyun.com/centos/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

other

The other repo files are not available by default; they can be switched to the Aliyun repository as well.
https://mirrors.aliyun.com/centos/8/extras/x86_64/os/

reload metadata
yum makecache

When modifying the sources in a docker image, use the following instructions in the Dockerfile to build:

RUN mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak
COPY CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo

RUN mv /etc/yum.repos.d/CentOS-AppStream.repo /etc/yum.repos.d/CentOS-AppStream.repo_bak
COPY CentOS-AppStream.repo /etc/yum.repos.d/CentOS-AppStream.repo

RUN yum makecache

http://www.voycn.com/article/centos8-peizhialiyuan-zuixiaohuaanzhuang

TDengine in docker

Posted on 2019-10-15 | In linux

prepare

prepare base image

docker image pull centos

prepare source code

git clone git@github.com:taosdata/TDengine.git
cd TDengine

or download a release package and unpack it

prepare the Dockerfile

# Compile image
FROM centos as compile

WORKDIR /root

COPY src/ ./src/
COPY deps/ ./deps/
COPY packaging/ ./packaging/
COPY CMakeLists.txt ./

RUN yum update -y

RUN yum group install -y "Development Tools"

RUN yum install -y cmake

WORKDIR /root/build
RUN cmake .. && cmake --build .

CMD ["bash"]

# Final image
FROM centos

WORKDIR /root

COPY --from=compile /root/build/build/bin /usr/bin/
COPY --from=compile /root/build/build/lib/libtaos.so /usr/lib/
COPY packaging/cfg/taos.cfg /etc/taos/

RUN echo "charset UTF-8" >> /etc/taos/taos.cfg

ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib"
ENV LC_CTYPE="en_US.UTF-8"

# the original blog is missing this
ENV LC_ALL="C"

CMD ["taosd"]

create TDengine image

docker build -t tdtest .

start service

create volumes

docker volume create td_log_vol
docker volume create td_data_vol

start

docker run -itd --name tdtest_run -v td_log_vol:/var/log/taos -v td_data_vol:/var/lib/taos tdtest

if the service fails to start, check the logs

docker ps -a
docker logs -f -t --tail 50 tdtest_run

The error info looks like this:

2019-10-15T00:51:45.614929519Z TDengine:[1]: Starting TDengine service...
2019-10-15T00:51:45.614966105Z 10/15 00:51:45.614216 7f6a24ce1b80 UTL timezone not configured, set to system default: (UTC, +0000)
2019-10-15T00:51:45.614975483Z 10/15 00:51:45.614410 7f6a24ce1b80 ERROR UTL can't get locale from system
2019-10-15T00:51:45.614981522Z 10/15 00:51:45.614486 7f6a24ce1b80 UTL Invalid locale:, please set the valid locale in config file
2019-10-15T00:51:45.770437005Z Invalid locale:, please set the valid locale in config file

PID 1 in the container is not /sbin/init, so systemctl commands can't execute.
Working around this would require the --privileged=true option and the /sbin/init parameter, which is not the right approach.

docker run -itd --name tdtest_run --privileged=true -v td_log_vol:/var/log/taos -v td_data_vol:/var/lib/taos tdtest /sbin/init

I met a problem here: the docker process started, but the taosd server process failed to start.
At first I thought the timedatectl command could fix it, so I tried adding
RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
to the Dockerfile and rebuilt the image, but that failed.

Then I tried to find the source code where the exception is thrown.
It's in TDengine/src/util/src/tglobalcfg.c: the C function setlocale() fails to read the locale.
I wrote a simple C file to verify the problem, then searched the internet.
Finally I found a similar problem in other software, and the answer was obvious: it's caused by an environment variable.

After adding the variable export LC_ALL=C, the taos client can start up and the demo C file prints the locale normally.
So add ENV LC_ALL="C" to the Dockerfile, rebuild the image, and start the container: taosd starts successfully.

docker run -itd --name tdtest_run -v td_log_vol:/var/log/taos -v td_data_vol:/var/lib/taos tdtest

https://blog.csdn.net/weixin_34038652/article/details/86236240
http://c.biancheng.net/ref/setlocale.html
https://www.runoob.com/cprogramming/c-function-setlocale.html

check

# enter the container
docker exec -it tdtest_run /bin/bash
# start tdengine client
taos

Experiment

code prepare

docker cp test.tar.gz tdtest_run:/root
docker exec -it tdtest_run bash
cd /root
tar -zxvf test.tar.gz
cd test

test.tar.gz

insert

modify config/config.sh: set the number of tables and records per table

./test.sh -F config/config.sh -f config/tdengine.sh

result:

You are testing TDengine...
TEST INFORMATION
======================================
Databse : TDENGINE
Action : Insert
Schema file : /root/test/data/schema.txt
Sample file : /root/test/data/sample.txt
Insert thread : 10
Detectors : 10000
Records per detector : 10000
Start time : 01/01/2015 00:00:00
Time interval : 900000

Config dir : /etc/taos/
Host :
User name : root
Password : taosdata
DB name : meterInfo
Table prefix : meter
STable : meters
DB property : days 30 tblocks 500 tables 5000
Records per request : 200
Insert mode : 0
Real situation : 0
======================================
Starting to test...
days 30 tblocks 500 tables 5000
Creating 10000 tables......
TDengine tables are created. Sleep 2 seconds and starting to insert data...

Inserting data......
ThreadID: 0 start table ID: 0 end table ID: 999
ThreadID: 1 start table ID: 1000 end table ID: 1999
ThreadID: 2 start table ID: 2000 end table ID: 2999
ThreadID: 3 start table ID: 3000 end table ID: 3999
ThreadID: 4 start table ID: 4000 end table ID: 4999
ThreadID: 5 start table ID: 5000 end table ID: 5999
ThreadID: 6 start table ID: 6000 end table ID: 6999
ThreadID: 7 start table ID: 7000 end table ID: 7999
ThreadID: 8 start table ID: 8000 end table ID: 8999
ThreadID: 9 start table ID: 9000 end table ID: 9999
Done! Spent 70.6098 seconds to insert 100000000 records, speed: 1416233.89 R/s
Test done!

query

docker exec -it tdtest_run bash
taos

result:

Welcome to the TDengine shell from linux, client version:1.6.2.2, server version:1.6.2.2
Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.

taos> select count(*) from meterinfo.meters;
count(*) |
======================
100000000|
Query OK, 1 row(s) in set (0.135594s)

taos> select count(*) from meterinfo.meters group by loc;
count(*) | loc |
===============================================================
40000000| |
20000000| |
10000000| |
30000000| |
Query OK, 4 row(s) in set (0.163399s)

parameter

config/config.sh data config

  • SCHEMA_FILE: schema config file, test/data/schema.txt
  • SAMPLE_FILE: sample data file; it must correspond to schema.txt, and the test program writes this data in a loop
  • NDETECTORS: number of tables
  • INSERT_THREAD: number of insert threads
  • RECORDS_PER_DETECTOR: number of records per table
  • START_TIME: start time
  • TIME_INTERVAL: data collection interval, in milliseconds

config/tdengine.sh engine config

  • INSERT_DB_NAME: db name
  • TB_PREFIX: table name = prefix + number
  • STABLE: super table name
  • DB_PROPERTY: database options
  • RECORDS_PER_REQUEST: number of records in one insert command; an insert statement is limited to 64 KB

https://github.com/taosdata/TDengine
https://github.com/taosdata/TDengine/blob/v1.6/src/util/src/tglobalcfg.c
https://blog.csdn.net/qishidiguadan/article/details/96284529
https://blog.csdn.net/u013829518/article/details/99681154
https://blog.csdn.net/u012954706/article/details/82588687

proxy in docker

Posted on 2019-09-26 | Edited on 2020-09-17 | In linux
  1. Create directory

mkdir -p /etc/systemd/system/docker.service.d

  2. Create proxy file

vim /etc/systemd/system/docker.service.d/http-proxy.conf

  3. Modify file

[Service]
Environment="HTTPS_PROXY=http://username:password@192.168.1.1:8080" "NO_PROXY=localhost,127.0.0.1"

  4. Save & flush

systemctl daemon-reload
systemctl restart docker

  5. Verify

systemctl show --property=Environment docker

https://www.cnblogs.com/atuotuo/p/7298673.html
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy

Learning jackson

Posted on 2019-09-11 | Edited on 2019-10-15 | In java

JS number type accuracy problem

In JS the number type has 53 bits of integer precision; if the backend responds with a 64-bit long,
JS will lose accuracy and the number will no longer be the original value.

53 bits corresponds to about 16 decimal digits (2^53), but a Java long can have about 19 (2^63).
So having the backend return the value as a string is a better choice to avoid the accuracy problem.
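A quick worked example of the loss (a sketch; 2^53 + 1 is the first integer that a 64-bit double, and hence a JS number, cannot represent exactly):

public class PrecisionDemo {
    public static void main(String[] args) {
        long id = 9007199254740993L;           // 2^53 + 1
        double asJsNumber = (double) id;       // what a JS engine would store
        System.out.println((long) asJsNumber); // prints 9007199254740992
    }
}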

JsonSerializer JsonDeserializer

Avoiding JS loss of accuracy

public class LongJsonSerializer extends JsonSerializer<Long> {
    @Override
    public void serialize(Long value, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException {
        String text = (value == null ? null : String.valueOf(value));
        if (text != null)
            jsonGenerator.writeString(text);
    }
}

Deserialize input model

@Slf4j
public class LongJsonDeserializer extends JsonDeserializer<Long> {
    @Override
    public Long deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException {
        String value = jsonParser.getText();
        try {
            return value == null ? null : Long.parseLong(value);
        } catch (NumberFormatException e) {
            log.error("Deserialize long type error: {}", value);
            return null;
        }
    }
}

Annotate the target property

@JsonSerialize(using = LongJsonSerializer.class)
@JsonDeserialize(using = LongJsonDeserializer.class)
private Long id;

Data Binding [Commonly used]

Maven

<!--jackson-->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.9.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.9.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.5</version>
</dependency>

Model

@Data
public class Country {
    private Integer id;
    private String countryName;
    private List<Province> provinces;
    private String[] lakes;
}
@Data
public class Province {
    private Integer id;
    private String provinceName;
    private List<City> cities;
}
@Data
@AllArgsConstructor
@NoArgsConstructor
public class City {
    private Integer id;
    private String cityName;
}

Bean2JsonStr

@Test
public void Bean2JsonStr() throws IOException {
    // Convert the object to Json
    ObjectMapper mapper = new ObjectMapper();
    // set the date format
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
    mapper.setDateFormat(dateFormat);
    City city1 = new City(1, "hangzhou");
    City city2 = new City(2, "taizhou");

    Province province = new Province();
    province.setId(1);
    List<City> cities = new ArrayList<City>();
    cities.add(city1);
    cities.add(city2);
    province.setCities(cities);

    Country country = new Country();
    country.setCountryName("China");
    country.setId(1);
    country.setLakes(new String[]{"Qinghai Lake", "Poyang Lake", "Dongting Lake", "Taihu Lake"});
    List<Province> provinces = new ArrayList<Province>();
    provinces.add(province);
    country.setProvinces(provinces);
    // setting true makes the json readable; not needed in a production environment
    mapper.configure(SerializationFeature.INDENT_OUTPUT, true);
    // ignore null properties
    mapper.setSerializationInclusion(JsonInclude.Include.NON_EMPTY);
    // java property names are keys by default; customize a key name with the @JsonProperty annotation
    mapper.writeValue(new File("country-demo.json"), country);
}
{
  "id" : 1,
  "countryName" : "China",
  "provinces" : [ {
    "id" : 1,
    "cities" : [ {
      "id" : 1,
      "cityName" : "hangzhou"
    }, {
      "id" : 2,
      "cityName" : "taizhou"
    } ]
  } ],
  "lakes" : [ "Qinghai Lake", "Poyang Lake", "Dongting Lake", "Taihu Lake" ]
}

JsonStr2Bean

string -> object

@Test
public void JsonStr2Bean() throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    File jsonFile = new File("country-demo.json");
    // don't fail on unknown properties
    mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
    Country country = mapper.readValue(jsonFile, Country.class);

    log.info(country.getCountryName());
    List<Province> provinces = country.getProvinces();
    for (Province province : provinces) {
        for (City city : province.getCities()) {
            log.info(city.getId() + " " + city.getCityName());
        }
    }
}

09:52:05.570 [main] INFO jackson.JacksonDemoTester - China
09:52:05.574 [main] INFO jackson.JacksonDemoTester - 1 hangzhou
09:52:05.575 [main] INFO jackson.JacksonDemoTester - 2 taizhou

JsonStr2List

string -> list

@Test
public void JsonStr2List() throws IOException {
    City city1 = new City(1, "hangzhou");
    City city2 = new City(2, "taizhou");

    List<City> cities = new ArrayList<>();
    cities.add(city1);
    cities.add(city2);

    ObjectMapper mapper = new ObjectMapper();
    String listJsonStr = mapper.writeValueAsString(cities);
    log.info(listJsonStr);
    List<City> list = mapper.readValue(listJsonStr, new TypeReference<List<City>>() {});
    for (City city : list) {
        log.info("id:" + city.getId() + " cityName:" + city.getCityName());
    }
}

09:52:15.610 [main] INFO jackson.JacksonDemoTester - China
09:52:15.612 [main] INFO jackson.JacksonDemoTester - 1 hangzhou
09:52:15.613 [main] INFO jackson.JacksonDemoTester - 2 taizhou

Streaming API [high performance]

self-defined parser

JsonSerializer

public class CityJsonSerializer extends JsonSerializer<City> {
    @Override
    public void serialize(City city, JsonGenerator jsonGenerator, SerializerProvider arg2) throws IOException {
        jsonGenerator.writeStartObject();
        if (city.getId() != null)
            jsonGenerator.writeNumberField("id", city.getId());
        jsonGenerator.writeStringField("cityName", city.getCityName());
        jsonGenerator.writeEndObject();
    }
}

JsonDeserializer

@Slf4j
public class CityJsonDeserializer extends JsonDeserializer<List<City>> {

    @Override
    public List<City> deserialize(JsonParser parser, DeserializationContext deserializationcontext) throws IOException {
        List<City> list = new ArrayList<>();
        // to deserialize an array, the first token must be JsonToken.START_ARRAY '['
        if (!JsonToken.START_ARRAY.equals(parser.getCurrentToken())) {
            log.info(parser.getCurrentToken().asString());
            return null;
        }
        // until the EOF
        while (!parser.isClosed()) {
            // loop until the target token
            JsonToken token = parser.nextToken();
            if (token == null) break;
            // every element in the array is an object, so the next JsonToken is JsonToken.START_OBJECT '{'
            if (!JsonToken.START_OBJECT.equals(token)) break;

            City city = null;
            while (true) {
                if (JsonToken.START_OBJECT.equals(token))
                    city = new City();

                token = parser.nextToken();
                if (token == null) break;

                if (JsonToken.FIELD_NAME.equals(token)) {
                    if ("id".equals(parser.getCurrentName())) {
                        token = parser.nextToken();
                        city.setId(parser.getIntValue());
                    } else if ("cityName".equals(parser.getCurrentName())) {
                        token = parser.nextToken();
                        city.setCityName(parser.getText());
                    }
                }
                if (JsonToken.END_OBJECT.equals(token))
                    list.add(city);
            }
        }
        return list;
    }
}

StreamJsonStr2List

@Test
public void StreamJsonStr2List() throws IOException {
    City city1 = new City();
    city1.setCityName("hangzhou");
    City city2 = new City(2, "taizhou");

    List<City> cities = new ArrayList<>();
    cities.add(city1);
    cities.add(city2);

    ObjectMapper mapper = new ObjectMapper();
    SimpleModule module = new SimpleModule();
    module.addSerializer(City.class, new CityJsonSerializer());
    mapper.registerModule(module);
    String listJsonStr = mapper.writeValueAsString(cities);

    log.info(listJsonStr);

    ObjectMapper mapper2 = new ObjectMapper();
    SimpleModule module2 = new SimpleModule();
    module2.addDeserializer(List.class, new CityJsonDeserializer());
    mapper2.registerModule(module2);
    List<City> list = mapper2.readValue(listJsonStr, new TypeReference<List<City>>() {});

    for (City city : list) {
        log.info("id:" + city.getId() + " cityName:" + city.getCityName());
    }
}
10:15:03.874 [main] INFO jackson.JacksonDemoTester - [{"cityName":"hangzhou"},{"id":2,"cityName":"taizhou"}]
10:15:03.889 [main] INFO jackson.JacksonDemoTester - id:null cityName:hangzhou
10:15:03.889 [main] INFO jackson.JacksonDemoTester - id:2 cityName:taizhou

use an annotation to avoid registering the module in code

@JsonSerialize(using = CityJsonSerializer.class)
public class City {
    ...
}
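The list deserializer can likewise be bound with an annotation on the consuming property instead of a registered module; a sketch, reusing the Province model from the data binding example above:

@Data
public class Province {
    private Integer id;
    private String provinceName;
    @JsonDeserialize(using = CityJsonDeserializer.class)
    private List<City> cities;
}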

Tree Model [flexible]

TreeMode2Json

no java class/POJO; create the model with the tree model

@Test
public void TreeMode2Json() throws IOException {

    // create a factory that provides nodes
    JsonNodeFactory factory = new JsonNodeFactory(false);
    // create a factory to convert the tree model to json
    JsonFactory jsonFactory = new JsonFactory();
    // create a json generator
    JsonGenerator generator = jsonFactory.createGenerator(new FileWriter(new File("country-demo2.json")));

    ArrayNode cities = factory.arrayNode();
    cities.add(factory.objectNode().put("id", 1).put("cityName", "hangzhou"))
            .add(factory.objectNode().put("id", 2).put("cityName", "taizhou"));

    ArrayNode provinces = factory.arrayNode();
    ObjectNode province = factory.objectNode();
    province.put("cities", cities);
    province.put("provinceName", "zhejiang");
    provinces.add(province);

    ObjectNode country = factory.objectNode();
    country.put("id", 1).put("countryName", "China");
    country.put("provinces", provinces);

    // caution! by default the mapper does not set a root node
    ObjectMapper mapper = new ObjectMapper();
    mapper.setSerializationInclusion(JsonInclude.Include.NON_EMPTY);
    mapper.writeTree(generator, country);
}

TreeModeReadJson

@Test
public void TreeModeReadJson() throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode node = mapper.readTree(new File("country-demo2.json"));
    // show the node type
    log.info("node JsonNodeType:" + node.getNodeType());
    log.info("----------------sub node name----------------------");
    Iterator<String> fieldNames = node.fieldNames();
    while (fieldNames.hasNext()) {
        String fieldName = fieldNames.next();
        log.info(fieldName + " ");
    }
    log.info("---------------------------------------------------");
}
10:58:44.995 [main] INFO jackson.JacksonDemoTester - node JsonNodeType:OBJECT
10:58:44.999 [main] INFO jackson.JacksonDemoTester - ----------------sub node name----------------------
10:58:44.999 [main] INFO jackson.JacksonDemoTester - id
10:58:44.999 [main] INFO jackson.JacksonDemoTester - countryName
10:58:44.999 [main] INFO jackson.JacksonDemoTester - provinces
10:58:45.000 [main] INFO jackson.JacksonDemoTester - ---------------------------------------------------

https://www.cnblogs.com/lvgg/p/7475140.html
https://www.cnblogs.com/williamjie/p/9242451.html
https://blog.csdn.net/gjb724332682/article/details/51586701
https://blog.csdn.net/java_huashan/article/details/46375857

docker in centos 7

Posted on 2019-08-28 | Edited on 2019-10-15 | In linux

install

update yum packages first
yum update

remove old versions
yum remove docker docker-common docker-selinux docker-engine

install dependencies and yum-config-manager (part of yum-utils)

yum install -y yum-utils device-mapper-persistent-data lvm2

set yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

install
yum install docker-ce docker-ce-cli containerd.io

problem

Requires: container-selinux >= 2.7.4

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release
yum install container-selinux

https://docs.docker.com/install/linux/docker-ce/centos/
https://blog.csdn.net/cai454692590/article/details/83479000
https://blog.csdn.net/renzhixin1314/article/details/88604096

proxy in linux

Posted on 2019-08-27 | Edited on 2020-09-17 | In linux

global proxy

in file /etc/profile set

export proxy="http://user:pwd1@192.168.3.1:8848"
export http_proxy=$proxy
export https_proxy=$proxy
export ftp_proxy=$proxy
export no_proxy="localhost, 127.0.0.1, ::1"

unset command

unset http_proxy
unset https_proxy
unset ftp_proxy
unset no_proxy

load variables
source /etc/profile

yum proxy

in file /etc/yum.conf set

proxy=http://192.168.3.1:8848
proxy_username=user
proxy_password=pwd

npm proxy

set command

npm config set proxy http://user:pwd@192.168.3.1:8848
npm config set https-proxy http://user:pwd@192.168.3.1:8848

unset command

npm config delete proxy
npm config delete https-proxy

npm setting
https://segmentfault.com/a/1190000002589144

maven proxy

<proxies>
    <proxy>
        <id>optional</id>
        <active>true</active>
        <protocol>http</protocol>
        <username>user</username>
        <password>pwd</password>
        <host>192.168.3.1</host>
        <port>8080</port>
        <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
</proxies>

https://www.cnblogs.com/EasonJim/p/9826681.html
https://blog.csdn.net/yanzi1225627/article/details/80247758
