Channel: 钻戒 and 仁豆米
Viewing all 290 articles

MySQL database-creation notes


I always forget the UTF-8 database-creation script needed after a fresh MySQL install, so here it is for the record.

vi gogs.sql  
DROP DATABASE IF EXISTS gogs;  
CREATE DATABASE IF NOT EXISTS gogs CHARACTER SET utf8 COLLATE utf8_general_ci;

mysql -u root -pyour_password < gogs.sql  
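The same two lines can be written from a heredoc so the script is reproducible; a sketch (the /tmp path is just for illustration — note that on modern MySQL utf8mb4 is usually the better default; utf8 is kept here to match the original):

```shell
# Write gogs.sql from a heredoc so the charset choice is recorded in one place
cat > /tmp/gogs.sql <<'EOF'
DROP DATABASE IF EXISTS gogs;
CREATE DATABASE IF NOT EXISTS gogs CHARACTER SET utf8 COLLATE utf8_general_ci;
EOF

# Then feed it to the server as before:
# mysql -u root -p < /tmp/gogs.sql
grep -c 'CHARACTER SET utf8' /tmp/gogs.sql
```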

Installing and configuring Go


First, download the binary package:

wget https://dl.google.com/go/go1.9.3.linux-amd64.tar.gz  

Edit ~/.bashrc:

export GOPATH=/home/git/go  
export GOROOT=/usr/local/src/go  
export PATH=${PATH}:$GOROOT/bin  

Unpack it:

tar zxf go1.9.3.linux-amd64.tar.gz  
mv go $GOROOT  
go  

OK. Now fetch the gogs source from GitHub and build it:

go get -d github.com/gogits/gogs  
cd $GOPATH/src/github.com/gogits/gogs  
go build  

Done.

How to pin Ubuntu to boot an older kernel


For some reason, Ubuntu's automatic upgrades left me with a pile of kernels.

Booting the newer ones kept failing; only the old 4.10 kernel boots normally and lets the Nvidia driver load.

So how do you make Ubuntu always pick the old kernel at boot?

It's simple:

vi /etc/default/grub  
GRUB_DEFAULT=  
Change the line above to:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.10.0-42-generic"  
sudo update-grub  

A quick explanation: Ubuntu uses GRUB 2, so you can no longer just edit grub.cfg to set the boot order.

The target is "Advanced options for Ubuntu" first, then its submenu item "Ubuntu, with Linux 4.10.0-42-generic".

"Advanced options for Ubuntu" is the full title of the collapsed "Advanced" entry.

The submenu items can be found in /boot/grub/grub.cfg:

 menuentry 'Ubuntu, with Linux 4.10.0-42-generic' ...
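To list every title you could paste into GRUB_DEFAULT, the menuentry and submenu lines can be grepped out; a sketch against a made-up grub.cfg (the 4.13 entry here is hypothetical — on a real box point the grep at /boot/grub/grub.cfg):

```shell
# Stand-in for /boot/grub/grub.cfg, just to demonstrate the extraction
cat > /tmp/grub.cfg <<'EOF'
menuentry 'Ubuntu' --class ubuntu {
submenu 'Advanced options for Ubuntu' {
menuentry 'Ubuntu, with Linux 4.13.0-32-generic' {
menuentry 'Ubuntu, with Linux 4.10.0-42-generic' {
EOF

# List every boot-menu title
grep -oE "(menuentry|submenu) '[^']+'" /tmp/grub.cfg | cut -d"'" -f2
```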

Done.

F5's SSL transactions-per-second limit


The company uses a wildcard SSL certificate, the whole site sits behind HTTPS, and an F5 out front does the SSL offloading.

Here's the catch: F5's SSL Transactions Per Second (TPS) is licensed. First, check the license:

tmsh show sys license detail | grep -i perf_SSL_total_TPS  
perf_SSL_total_TPS [500]  

It shows 500.

Next, check how many cores (TMMs) there are:

tmsh show sys tmm-info global | grep -i 'TMM count'  
TMM Count               4  

Four cores.

So the SSL TPS limit is 500 × 4 = 2000 per second.

Beyond 2000, you have to buy a bigger license.
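The arithmetic above can be scripted straight from the two tmsh outputs; a sketch, hard-coding the values seen here:

```shell
# Values read off the tmsh output above
per_tmm_tps=500   # from: perf_SSL_total_TPS [500]
tmm_count=4       # from: TMM Count

# Licensed SSL TPS = per-TMM limit x number of TMMs
echo $(( per_tmm_tps * tmm_count ))
```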

Setting up Postfix relaying, plus a mail black hole


To let Postfix freely relay mail from the internal network, only one setting needs to change:

vi /etc/postfix/main.cf  
...
mynetworks = 127.0.0.0/8, 172.16.0.0/16  
...

To configure a black hole that accepts every message and then drops it:

relayhost =  
relay_transport = relay  
relay_domains = static:ALL  
smtpd_end_of_data_restrictions = check_client_access static:discard  

Of course, you could instead feed all this mail to amavis to train spam detection.

Command to test sending mail:

echo "body of your email" | mail -s "This is a Subject os version" -r "abc@kindlefan.xin" test@abc.com  

Note: if CentOS or Ubuntu happens to be missing the mail command:

yum install mailx  
apt-get install bsd-mailx  

Shrinking an ext4 partition


With virtualizor installed, I set out to test a Xen VM environment.

Then the trouble: Xen templates only work with LVM, but at install time I had partitioned only / and swap, used up all the space, and left nowhere to create LVM.

No choice but to shrink the ext4 / partition.

The steps:

Check what filesystem each partition holds:

file -sL /dev/sd*  
/dev/sda:  x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x2044148, GRUB version 0.94; partition 1: ID=0x82, starthead 32, startsector 2048, 8388608 sectors; partition 2: ID=0x83, active, starthead 75, startsector 8390656, 411039744 sectors, code offset 0x48
/dev/sda1: Linux/i386 swap file (new style) 1 (4K pages) size 1048575 pages
/dev/sda2: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files)

So the partition to shrink is /dev/sda2, and it is ext4.

OK. Since this box is a KVM guest, edit its definition to attach an ISO and boot into rescue mode:

virsh edit xxx  
Change the boot order from hd to cdrom:
...
    <boot dev='hd'/>
    <boot dev='cdrom'/>
...
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/export/kvm/iso/CentOS-7-x86_64-NetInstall-1708.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
...

Boot the VM, enter the CD-ROM installer, and choose Troubleshooting,

then choose Rescue.

Because we are going to operate on the disk, do not pick option 1 (mount the disk at /mnt/sysimage); pick option 3, which skips mounting and drops straight into a shell.

Check the filesystem and shrink /dev/sda2 to 5G:

e2fsck -f /dev/sda2  
resize2fs /dev/sda2 5G  

Careful: this is only half the job. The files have been packed into the first 5G of /dev/sda2, but the partition table itself has not changed yet.

Next, use parted to fix the partition:

parted /dev/sda  
print  
The print output shows swap occupying the first 2096 (MB), and partition 2 is /dev/sda2.
Delete /dev/sda2:
rm 2  
Recreate it:
mkpart  
... primary
... 2096
... 7096

Note: the new /dev/sda2 starts at 2096, so its end point is 2096 + 5000 = 7096.
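The end point in the note above can be computed rather than done in your head (units are MB, matching the parted output):

```shell
# parted is working in MB here: the new partition starts where swap ends
start_mb=2096   # end of the swap partition / start of /dev/sda2
size_mb=5000    # a little over the 5G the filesystem was shrunk to
echo $(( start_mb + size_mb ))   # end point to give mkpart
```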

Finally, run the filesystem check and resize once more:

e2fsck -f /dev/sda2  
resize2fs /dev/sda2 5G  

Reboot — done!

Installing different MongoDB versions on Ubuntu


Installing MongoDB 2.6.9:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10  
echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list  
sudo apt-get update  
sudo apt-get install -y mongodb-org=2.6.9 mongodb-org-server=2.6.9 mongodb-org-shell=2.6.9 mongodb-org-mongos=2.6.9 mongodb-org-tools=2.6.9  

MongoDB 3.2:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv EA312927  
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list  
sudo apt-get update  
sudo apt-get install -y mongodb-org

Official documentation:
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/

Basic test:
Just run mongo; by default it connects to the test database.

Basic user-management operations:

Enter the mongo shell:
$ mongo

Create (switch to) the explorerdb database:
> use explorerdb

Create a user that can read and write explorerdb:
> db.createUser( { user: "explorer", pwd: "3xp!0reR", roles: [ "readWrite" ] } )
Note: on the mongo shell 2.4.x the command is addUser, not createUser:
> db.addUser( { user: "username", pwd: "password", roles: [ "readWrite"] })

Freelancer task #1: multi-IP anonymous proxies with authentication


There is a proxy-setup task on Freelancer:

Project Description  
Hi, I would like to create a new VPS proxy server with multiple IPs on the same VPS.  
Are you able to that?  
What OS do you prefer?  
- I have 10 ips.
- I want them to be as much anonymous as you can.
- I would have an Username and Password as auth, but if needed the possibility to have an IP authentication too.
My budget is: ~20$  
Thank you in advance.  

Honestly, ten IPs are unnecessary — one IP is plenty using Tor + polipo.

The Tor part, started entirely from the command line:

$ mkdir -p /tmp/tor10001/data
$ /usr/bin/tor SOCKSPort 10001 CONTROLPort 20001 DATADirectory /tmp/tor10001/data

The Polipo part, also started from the command line:

$ polipo socksParentProxy="127.0.0.1:10001" proxyPort=30001 proxyAddress="0.0.0.0" authCredentials="username:password"

Combined into a script:

$ mkdir -p /tmp/tor10002/data
$ nohup /usr/bin/tor SOCKSPort 10002 CONTROLPort 20002 DATADirectory /tmp/tor10002/data &
$ nohup polipo socksParentProxy="127.0.0.1:10002" proxyPort=30002 proxyAddress="0.0.0.0" authCredentials="username:password" &

To start 100 Tor + Polipo pairs:

#!/bin/bash

a=10001  
b=20001  
c=30001  
n=10100

echo "Start multiple Tors+polipo"  
echo "Begin port " $a  
echo "End port " $n

while [ $a -le $n ]  
do  
    echo "Start Tor on port" $a
    mkdir -p /tmp/tor$a/data
    nohup /usr/bin/tor SOCKSPort $a CONTROLPort $b DATADirectory /tmp/tor$a/data &
    echo "Start Polipo on port" $c
    nohup polipo socksParentProxy="127.0.0.1:$a" proxyPort=$c proxyAddress="0.0.0.0" authCredentials="username:password" &

    a=$(($a + 1)) 
    b=$(($b + 1)) 
    c=$(($c + 1))
done  

A script to kill every Tor + Polipo process:

#!/bin/bash
# Bracket the first letter so the grep process itself is not matched;
# xargs -r skips the kill when nothing matches
ps aux | grep '[t]or' | awk '{print $2}' | xargs -r kill -9  
ps aux | grep '[p]olipo' | awk '{print $2}' | xargs -r kill -9  

CentOS 6.6 suddenly can't git clone


A machine at the office suddenly stopped being able to git clone.

Strange. The error:

git clone https://github.com/httperf/httperf  
Cloning into 'httperf'...  
fatal: unable to access 'https://github.com/httperf/httperf/': SSL connect error  

Bizarre — it worked fine a few days ago. Turn on debugging and look:

$> GIT_CURL_VERBOSE=1 git clone https://github.com/httperf/httperf
Cloning into 'httperf'...  
* Couldn't find host github.com in the .netrc file; using defaults
* About to connect() to github.com port 443 (#0)
*   Trying 192.30.253.113... * Connected to github.com (192.30.253.113) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* NSS error -12190
* Error in TLS handshake, trying SSLv3...
> GET /httperf/httperf/info/refs?service=git-upload-pack HTTP/1.1
User-Agent: git/2.4.8  
Host: github.com  
Accept: */*  
Accept-Encoding: gzip  
Accept-Language: en-US, *;q=0.9  
Pragma: no-cache

* Connection died, retrying a fresh connect
* Expire cleared
* Closing connection #0
* Issue another request to this URL: 'https://github.com/httperf/httperf/info/refs?service=git-upload-pack'
* Couldn't find host github.com in the .netrc file; using defaults
* About to connect() to github.com port 443 (#0)
*   Trying 192.30.253.113... * Connected to github.com (192.30.253.113) port 443 (#0)
* TLS disabled due to previous handshake failure
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* NSS error -12286
* Expire cleared
* Closing connection #0
fatal: unable to access 'https://github.com/httperf/httperf/': SSL connect error  

A bolt from the blue: an SSL error.

The fix is simply to update nss:

yum update -y nss curl  

And it works again.

Installing and using httperf


First, clone it:

git clone https://github.com/httperf/httperf  

Prepare the environment, then build and install (autoreconf needs the autotools; "lib-tools" is not a real package name):

yum install autoconf automake libtool  
cd httperf  
mkdir m4  
autoreconf -i  
./configure
make  

Give it a test run — the number of options is no joke:

./httperf --server 192.168.92.151 --ssl --server-name abc.xyz.com  --rate 10 --num-conns=500

Freelancer task #2: build a scrambled, obfuscated OpenVPN


An odd request. After some digging, the patch author explains:

I have created a patch which introduces some forms of scrambling to the packet payload of any OpenVPN connection.  
I have been successfully using the patch with Iranian and Chinese users for some time now.  

So Iran has it rough too.

Sigh. Given that openvpn connections cannot be established from 森华易腾 — whether they blocked UDP 1194 outright or blocked the openvpn protocol itself — either way it stinks.

In short, the patch obfuscates the openvpn protocol and adds one configuration directive:

The scramble directive  
scramble reverse #reverses the transmitted data; this alone is usually enough to evade detection in China and Iran  
scramble xorptrpos #XORs the payload data of each transmitted packet  
scramble obfuscate password #the strongest option: reverse + XOR + password combined; "password" is whatever you set

With this directive in place, the advice is to set cipher none: after scrambling there is no need to specify a cipher, and a cipher costs CPU — scramble costs less CPU than a cipher does.

Let's build one. Here I use openvpn 2.4.4 and the matching patch.

Download:

wget http://img.rendoumi.com/soft/vpn/2.4.4.zip  
wget http://img.rendoumi.com/soft/vpn/master.zip  
unzip -x 2.4.4.zip  
unzip -x master.zip  

Apply the patch and build:

cd openvpn-release-2.4/  
git apply ../openvpn_xorpatch-master/openvpn_xor.patch  
autoreconf -i -v -f  
./configure --prefix=/export/servers/openvpn
make  
make install  

Install easy-rsa 3.0. Credit where due: easy-rsa 3.0 is a big step up from 2.0 — a single executable and far less fuss:

wget http://img.rendoumi.com/soft/vpn/easy-rsa.zip  
unzip -x easy-rsa.zip  

Create the openvpn configuration directory:

mkdir -p /etc/openvpn/conf  
cp -r easy-rsa-master/easyrsa3/* /etc/openvpn  

See what commands the new easy-rsa 3.0 offers:

cd /etc/openvpn  
./easyrsa 

Easy-RSA 3 usage and overview

USAGE: easyrsa [options] COMMAND [command-options]

A list of commands is shown below. To get detailed usage and help for a  
command, run:  
  ./easyrsa help COMMAND

For a listing of options that can be supplied before the command, use:  
  ./easyrsa help options

Here is the list of commands available with a short syntax reminder. Use the  
'help' command above to get full usage details.

  init-pki
  build-ca [ cmd-opts ]
  gen-dh
  gen-req <filename_base> [ cmd-opts ]
  sign-req <type> <filename_base>
  build-client-full <filename_base> [ cmd-opts ]
  build-server-full <filename_base> [ cmd-opts ]
  revoke <filename_base>
  gen-crl
  update-db
  show-req <filename_base> [ cmd-opts ]
  show-cert <filename_base> [ cmd-opts ]
  import-req <request_file_path> <short_basename>
  export-p7 <filename_base> [ cmd-opts ]
  export-p12 <filename_base> [ cmd-opts ]
  set-rsa-pass <filename_base> [ cmd-opts ]
  set-ec-pass <filename_base> [ cmd-opts ]

DIRECTORY STATUS (commands would take effect on these locations)  
  EASYRSA: .
      PKI:  /etc/openvpn/pki

Clear and concise. Here we go, in one breath:

cd /etc/openvpn  
./easyrsa init-pki
./easyrsa --batch build-ca nopass
./easyrsa --batch build-server-full server nopass
./easyrsa --batch build-client-full client1 nopass
./easyrsa gen-dh

That's everything — no pile of scripts, no vars file to edit. A big improvement over easy-rsa 2.0!

Prepare the server-side files:

cd /etc/openvpn/  
cp pki/ca.crt pki/dh.pem pki/private/client1.key pki/private/server.key pki/issued/* /etc/openvpn/conf  
cd /etc/openvpn/conf  
/export/servers/openvpn/sbin/openvpn --genkey --secret ta.key

Now /etc/openvpn/conf holds seven files:

ca.crt  
server.key  
client1.key  
client1.crt  
dh.pem  
server.crt  
ta.key  

Prepare a server config template:

cat<<EOF>>/etc/openvpn/conf/server.conf  
port 1194  
proto udp  
dev tun

server 10.8.0.0 255.255.255.0

scramble obfuscate fuckfuckfuck

ca /etc/openvpn/conf/ca.crt  
cert /etc/openvpn/conf/server.crt  
key /etc/openvpn/conf/server.key  
tls-auth /etc/openvpn/conf/ta.key 0  
dh /etc/openvpn/conf/dh.pem  
cipher none

#push "route 172.16.0.0 255.255.0.0"

client-to-client  
comp-lzo

persist-key  
persist-tun

user nobody  
group nobody

ifconfig-pool-persist /etc/openvpn/conf/ipp.txt  
status      /var/log/openvpn-status.log  
log         /var/log/openvpn.log  
log-append  /var/log/openvpn.log

tun-mtu 1500  
tun-mtu-extra 32  
mssfix 1450  
keepalive 5 30

verb 3  
EOF  

Start the server:

/export/servers/openvpn/sbin/openvpn --config /etc/openvpn/conf/server.conf --daemon

Prepare the client file:

cat<<EOF>>/etc/openvpn/conf/client1.ovpn  
client  
dev tun  
proto udp  
remote change_this_to_server_address 1194  
scramble obfuscate fuckfuckfuck  
resolv-retry infinite  
nobind  
persist-key  
persist-tun  
user nobody  
group nogroup  
ca ca.crt  
cert client1.crt  
key client1.key  
tls-auth ta.key 1  
remote-cert-tls server  
cipher none  
comp-lzo  
tun-mtu 1500  
tun-mtu-extra 32  
mssfix 1450  
keepalive 5 30  
verb 3  
EOF  

Merge everything into a single client file. Mind the file names configured inside merge.sh:
ca="ca.crt"
cert="client1.crt"
key="client1.key"
tlsauth="ta.key"
ovpndest="client1.ovpn"

cd /etc/openvpn/conf  
wget http://img.rendoumi.com/soft/vpn/merge.sh  
chmod 755 merge.sh  
./merge.sh
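If you would rather not fetch merge.sh, the merge itself is just each file wrapped in its OpenVPN inline tag and appended to the .ovpn; a minimal sketch of that idea (dummy file contents, hypothetical /tmp paths — not the actual merge.sh):

```shell
# Work in a scratch directory with dummy one-line stand-ins for the PKI files
mkdir -p /tmp/mergedemo && cd /tmp/mergedemo
echo "CA-CERT"     > ca.crt
echo "CLIENT-CERT" > client1.crt
echo "CLIENT-KEY"  > client1.key
echo "TA-KEY"      > ta.key
: > client1.ovpn   # the base client config would normally already be here

# Append each file wrapped in its OpenVPN inline tag
for pair in "ca ca.crt" "cert client1.crt" "key client1.key" "tls-auth ta.key"; do
    set -- $pair
    { echo "<$1>"; cat "$2"; echo "</$1>"; } >> client1.ovpn
done

grep -c '^<' client1.ovpn   # 8 tag lines: one open and one close per file
```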

This produces a single, all-in-one client1.ovpn connection file (server.conf could likewise inline all of its files). The result:

client  
dev tun  
proto udp  
remote change_this_to_server_address 1194  
scramble obfuscate fuckfuckfuck  
resolv-retry infinite  
nobind  
persist-key  
persist-tun  
remote-cert-tls server  
cipher none  
comp-lzo  
verb 3  
key-direction 1  
<ca>  
-----BEGIN CERTIFICATE-----
MIIDKzCCAhOgAwIBAgIJAOG5arbs5t9RMA0GCSqGSIb3DQEBCwUAMBMxETAPBgNV  
BAMTCENoYW5nZU1lMB4XDTE4MDMyODAzNDkyMloXDTI4MDMyNTAzNDkyMlowEzER  
MA8GA1UEAxMIQ2hhbmdlTWUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB  
AQDMOUuQg49OstGbfPLjTgzwb5YBmBeVxyF3+5jmKbgPXujZ3dvdBwxaslVUwre6  
XsMUBz3vbB7Kf1BBDHe2jt60p2x2O+ptTb3rRhTPLhdhd9C3HUhwkNYc7jv1+ua3  
sUlwiYikltKhXGVU3e/XYB+Aiw63mem4ex5T4kJ/KIKoulGhUsaOl9JtPPKbeIlV  
BgUzBLHNt/9bY7r7m2Fh0VmbD5p5YMZEGrg+WX0qzT4wKD/734VdxuAoFwd7as6s  
CH73w0ykscV7evUJEaNu1keTqgqG5SuE3HzQ1cmWSSeF84gUes+l2JAivpQ/XTkF  
wdLnq2caXVTMDF8t/Y1e8JfVAgMBAAGjgYEwfzAdBgNVHQ4EFgQU+SKBqluAW6hQ  
p8y9Q22ZBhkTw5IwQwYDVR0jBDwwOoAU+SKBqluAW6hQp8y9Q22ZBhkTw5KhF6QV  
MBMxETAPBgNVBAMTCENoYW5nZU1lggkA4blqtuzm31EwDAYDVR0TBAUwAwEB/zAL  
BgNVHQ8EBAMCAQYwDQYJKoZIhvcNAQELBQADggEBAL9ZqyMSrrJ2ss/5pQhUBw71  
nmjeT8DPg7Optiq02oAPdIo06WdJ77Y+mFypGKw8uUHp/h0mL5wBr6NBYbdw+5Lc  
vv4tCpOzzNW7PJngJWilIdvL1W+y3i3/AolSs7jAradaOQOpI23tOeQAQUmwchmt  
hvgKH8kyIWlOzxGIHdG9Spv8Oi1X6dwD0t4ddaNqcnCbyC2cBX4TvlXeVixMdBLY  
xq/5+G6dlJhaUzD4lG9Co7PTctwOFzKIP+mCrhLFCh7v5L6HCqL5ZLI7bWYTy0rm  
XURbleynyld95FKuul5YFRyb/j+I8iBd3Sw9TWhVuqKb4JX9n6zB1FxkNUX1r4g=  
-----END CERTIFICATE-----
</ca>  
<cert>  
-----BEGIN CERTIFICATE-----
MIIDRTCCAi2gAwIBAgIRALV3i3gqfdbfWujom75JgiwwDQYJKoZIhvcNAQELBQAw  
EzERMA8GA1UEAxMIQ2hhbmdlTWUwHhcNMTgwMzI4MDg1NjQ5WhcNMjgwMzI1MDg1  
NjQ5WjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A  
MIIBCgKCAQEAqXq+oaXFyp24OBuXrAPRxnyg4t7eKl7jh4EmL+T2xnQ5qZfwDBz/  
0mI6MDgqPFDC8DWeO3iJZlNlIBNrHpza2kj53Fw7UB1yyi9fArt3Luj2HdjqXyDw  
yLTX6dVV/m+dP7Jq1OnnpaG7gbkjKaaS8inc79v1ismJK9ZAwaiQobv1T3Th7eL+  
nrKfjCJ/gevHfXocR7PuEe1CwyUEp124Z5fhq7S6JAgmt3WbiBVPIg5lp/pCyfbh  
K6z1Y5abPVCAJXTqgbaYBLIorO88wn5zn5D6ZFXDTdo3gJgQSlbax6AN5CqyK+Qi  
U2mF7Cf8+Ma+0eLbOFM62kulaqXX+uUojwIDAQABo4GUMIGRMAkGA1UdEwQCMAAw  
HQYDVR0OBBYEFJaAOw/CP8O/dnncm/VwlPow8kM9MEMGA1UdIwQ8MDqAFPkigapb  
gFuoUKfMvUNtmQYZE8OSoRekFTATMREwDwYDVQQDEwhDaGFuZ2VNZYIJAOG5arbs  
5t9RMBMGA1UdJQQMMAoGCCsGAQUFBwMCMAsGA1UdDwQEAwIHgDANBgkqhkiG9w0B  
AQsFAAOCAQEAmU9Y+dP4PH0eh4KMNW0QhseN0t0CK1Nzyu3hNcuIntns3J3VpJ1u  
1WKA16mnH8nLu2hNUKnWkOnuvPnwXIprWdg9Zvmct/QEtys4THnG3+5Ni7wVexhU  
lNU0qZcwGNwqQiZBrHcZZq6pAKtrAH0kD6/l5qCeScPrDIy6w3eFfGa/AJcEBNEN  
Wruj3hUQxRsv35XFfxEROaklfuLrfr0U1OlWDySSGMQafXjZCmLdxRb5IkI90255  
t3yksT9Bj7v/2n++ttlQTH0FK5zY7Uz76A21idiRCw/aVeXvJkafYqi+o/9kkVJh  
w+Q9Lm+AKGkaaMgz0dt0cmVZgHsnyzOzhQ==  
-----END CERTIFICATE-----
</cert>  
<key>  
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCper6hpcXKnbg4  
G5esA9HGfKDi3t4qXuOHgSYv5PbGdDmpl/AMHP/SYjowOCo8UMLwNZ47eIlmU2Ug  
E2senNraSPncXDtQHXLKL18Cu3cu6PYd2OpfIPDItNfp1VX+b50/smrU6eelobuB  
uSMpppLyKdzv2/WKyYkr1kDBqJChu/VPdOHt4v6esp+MIn+B68d9ehxHs+4R7ULD  
JQSnXbhnl+GrtLokCCa3dZuIFU8iDmWn+kLJ9uErrPVjlps9UIAldOqBtpgEsiis  
7zzCfnOfkPpkVcNN2jeAmBBKVtrHoA3kKrIr5CJTaYXsJ/z4xr7R4ts4UzraS6Vq  
pdf65SiPAgMBAAECggEAKwlSUzYHTfZTC1xmXXXy1RZcvH+fpt7FpGk1S0A3Mhnd  
cqV0fX73r3LmF8yLXRmdBuZ2sd9f9K4EpeqIbxOht4CEgmKhZSy1M4Zn+AemsjDS  
Hq4whcuVmUHi+iwEVEH/imdCHaLwAe1Z8g0TUsZL1lavFfGjHoUi4hDcDNFDOO5Z  
+gOL+ZLtAwCibcdTgdW7xXZMY6U4Mg4f7VggFqpuxe90ebaa1DHUYOm4XFQdrEOg
KtC93wkFKuz9fXvyCyjk5t3oXO3EQvLSsm0W1LhBYkdZp8fUmkgh7Lz2J/h7/qK9  
FYxbkvbFE7Zl1FE0g4kYNgMRq94Dy7IPhrbCXh1XUQKBgQDXYJt0KlfNdIQrZVsg  
kGkvE9eeEw3XhRCzsKIqnD3DkvkgowD6kpq9rU3tTg1x830QbfPu9L5cQEt5Hlsg  
zzCWKsjvC8Gnblz4ctvUvl+o9jbIKBf1aSykGGLZqB8rITd+gY8jcRDE637pxmKO  
HHhN0hjXyhSpjSCeWfvHHt4OdQKBgQDJcfq3nVmO6JBjn8Csywi9OHhodI4IKcLH  
EuEoJR0akv5l07UQGZXjkT60UUg5uAIU+z/Bk7UOErJckxvMCLzg9/O1ZRCEppdP  
GKMP4DM/xxdf37zEifBtFzBG9LCoIEJqRwzhaD7jyEg8jEv9G+ege/Bp5W9WDS2P  
9bWWF3DCcwKBgHN+0t4QdtUuTlIXIC7uQfmE4nNaNGoGaVZyugOvlU9zWTUvNC8q  
vuBINymyWXNp5v8Qd2cEx7Agqlhg9u05LgzZFLdbzpVCkYiJz2jeTd4FaosbNP3d  
UJsOmLOvfEdcoK2uPFv9Hcj7oCssv10F112j9L6DF2F01LEV//ZfjyShAoGAEHjo  
hoEwZJYx0GOszrRfh5GJjwkQ4CwCCGNL1AuM4LJqaQsxwBpHfm9PEFGhNU8NpIeT  
BBI++OKggR9qY3nHcCH2ZLvZ6O7yan5aPx8XMbzm9WkHN48MAO+ne/XgSC8zHxum  
OvxaQCgNeB4EzLKucxoPY6lmPEQhmKb/7UEHcG8CgYEAxhtREMFAL0+uDAJ72WLx  
qCEM0x9zet5DOLOqxUSlBJILAwwcgGA1DXdjMej/BxZbIHgZANZ3Gj+k3D5m4GQl  
Pe3DtI3HBLbVH8DeyZC6fJxjaNi16/mbD6puRpPs+w0D2pQwJr6k2uR8+G883RW5  
4vcpovBkutR5n1M09M3DyeU=  
-----END PRIVATE KEY-----
</key>  
<tls-auth>  
-----BEGIN OpenVPN Static key V1-----
79a3add18ba52b97045de864939a9a9e  
a0a07657bce8a0210c41b7d83d48ec48  
81c89db3dbec8b4bfc13424d3813711d  
f34a4770ebeaf181eeffcd3f38cea425  
78006c5b7506a5d9dcb0079daa3b3412  
5434af9df560f3a0d29bc8b333479943  
0f5839fee349f2079d03c9d31d6e2bf4  
26a32180c8e4f6c1579acbfef7596335  
a4147c64395ff77927ebe02f2a757d17  
a2df3245670c1eff89f9e1025dbc4b07  
8d3fcfaf4fbad44d9becf17f5d6d34ee  
50d616fb58bc0e29da54a934353701a9  
973df9b1f9041706642ff8ed00b24462  
5cb52768dd5472093855d0e8fa5b8762  
cca2aa48bda3d8964a19842fbf9d2081  
ff0075295379f663129723ee9319a789  
-----END OpenVPN Static key V1-----
</tls-auth>  

OK. Copy this client1.ovpn out; we'll use it on Windows.

On Windows, download the stock openvpn-gui installer:

http://img.rendoumi.com/soft/vpn/openvpn-install-2.4.4-I601.exe  

Then, depending on whether your system is 32- or 64-bit, download the matching patched openvpn binary:

http://img.rendoumi.com/soft/vpn/openvpn-2.4-32.exe  
http://img.rendoumi.com/soft/vpn/openvpn-2.4-64.exe  

Install openvpn first, then go to

C:\Program Files\OpenVPN\config  

and drop client1.ovpn in.

Then run the desktop OpenVPN-GUI as Administrator, right-click it, and connect.

Freelancer task #3: set up a proxy on a VPS for Instagram


The requirements:

• Multiple subnets to avoid bans
• I need the proxies to have the ability of User:Pass
• Proxy needs to be Residential IPv6

He also gave a reference:
https://www.blackhatworld.com/seo/never-buy-proxies-again-setup-your-own-proxy-server.872539/

Hmm, interesting. Following his link:

Step one: go to LowEndBox.com or Webhostingtalk.com and find a well-reviewed VPS provider that offers additional IPs — typically $1 per extra IP per month.

Step two: buy a VPS — 1 GB RAM, 1 core, 100 Mb bandwidth — with 10 extra IPs attached.

Such a VPS runs about $5 a month, plus $10 a month for the 10 IPs: $15 a month total (about 100 RMB) for 11 usable IPs.

For this task's multiple-subnet requirement, buy from several of the provider's locations — say one box each in Los Angeles, Texas, and New York — each with 10 extra IPs.

Step three: install the proxy software:

Download 3proxy:

wget http://img.rendoumi.com/soft/3proxy/0.8.11.tar.gz  
tar zxvf 0.8.11.tar.gz  

Build and install:

cd 3proxy-0.8.11  
sed -i 's/^prefix.*/prefix=\/usr\/local\/3proxy/' Makefile.Linux  
sed -i '/DENY.*/a #define ANONYMOUS 1' src/proxy.h  
make -f Makefile.Linux  
make -f Makefile.Linux install  

Note that I installed to /usr/local/3proxy above; adjust to taste.

Look at the sample configuration:

cat cfg/3proxy.cfg.sample |grep -v ^# | grep -v ^$  
nserver 10.1.2.1  
nserver 10.2.2.2  
nscache 65536  
timeouts 1 5 30 60 180 1800 15 60  
users 3APA3A:CL:3apa3a "test:CR:$1$qwer$CHFTUFGqkjue9HyhcMHEe1"  
service  
log c:\3proxy\logs\3proxy.log D  
logformat "- +_L%t.%.  %N.%p %E %U %C:%c %R:%r %O %I %h %T"  
archiver rar rar a -df -inul %A %F  
rotate 30  
auth iponly  
external 10.1.1.1  
internal 192.168.1.1  
auth none  
dnspr  
auth strong  
deny * * 127.0.0.1,192.168.1.1  
allow * * * 80-88,8080-8088 HTTP  
allow * * * 443,8443 HTTPS  
proxy -n  
auth none  
pop3p  
tcppm 25 mail.my.provider 25  
auth strong  
flush  
allow 3APA3A,test  
maxconn 20  
socks  
auth strong  
flush  
internal 127.0.0.1  
allow 3APA3A 127.0.0.1  
maxconn 3  
admin  

A pile of useless settings — strip them all:

cat<<EOF>>/usr/local/3proxy/bin/3proxy.conf  
daemon  
timeouts 1 5 30 60 180 1800 15 60  
log /var/log/3proxy.log D  
logformat "- +_L%t.%.  %N.%p %E %U %C:%c %R:%r %O %I %h %T"  
rotate 30

users user:CL:pass

auth strong  
allow user  
proxy -p3128 -a -i172.16.8.1 -e172.16.8.1  
flush  
EOF  

The useful part is just these five lines:
users defines a user named user with a cleartext (CL) password of pass
auth strong requires authentication
allow user grants that user access
proxy -p<port> -a -i<internal listen IP> -e<egress IP>
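With ten external IPs, the natural extension is one proxy line per IP, each on its own port; a sketch that generates those config lines (the IPs are made up — substitute the ones your provider assigned):

```shell
# Emit one 3proxy "proxy" line per external IP, on ports 3128, 3129, ...
port=3128
for ip in 172.16.8.1 172.16.8.2 172.16.8.3; do
    echo "proxy -p$port -a -i172.16.8.1 -e$ip"
    port=$(( port + 1 ))
done
```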

OK, now start it:

cd /usr/local/3proxy/bin  
./3proxy 3proxy.conf

Test it:

curl --proxy 172.16.8.1:3128 --proxy-user user:pass http://www.sina.com.cn  -vvv|more  

One more requirement: IPv6.

The format is:

proxy -6 -n -a -p<PORT1> -i<IPv4> -e<IPv6>  
proxy -6 -n -a -p<PORT2> -i<IPv4> -e<IPv6>  
...
So something like this does it:
proxy -6 -n -a -p3128 -i172.16.8.1 -e2a02:26f0:4000:17d::2adb  

OK, done.

Building an automatic censorship-bypass gateway with a NanoPi NEO


Thanks to Xuebing Wang for kindly bringing me onto his team to hack on embedded-device kernels — through which I learned that Chinese developers are in fact at the forefront of embedded kernel work.

He also gifted me a NanoPi NEO. At home I had been tunneling through an HP 5315, but its kernel was too old for ipset, so I had no choice but to upgrade to the NanoPi NEO.

The process:

1. Download the official image nanopi-neo_FriendlyCore-Xenial_4.14.0_20171218.img.zip

Note: it must be the 4.14 kernel, not 3.4.39, because ipset on 3.4.39 lacks the xt_set module.

2. Flash it to an 8 GB TF card and log in over ssh.

The NanoPi boots with DHCP by default, so scan the whole subnet:

nmap -p22 --open 192.168.92.0/24  

That turns up the board's IP; ssh in — the root password is fa.

3. Install ipset and verify it:

apt install ipset  
ipset list  
modprobe xt_set  
lsmod  

4. Install dnscrypt-proxy:

apt install dnscrypt-proxy  

Note: dnscrypt-proxy listens on 127.0.2.1:53 here. Changing that is oddly indirect: systemd socket activation actually controls the listening port, so editing /etc/default/dnscrypt-proxy directly has no effect!

export SYSTEMD_EDITOR="vi"  
systemctl edit dnscrypt-proxy.socket  
Enter the following:
[Socket]
ListenStream=  
ListenDatagram=  
ListenStream=127.0.0.1:5353  
ListenDatagram=127.0.0.1:5353  

Restart after the change; the default upstream is Cisco's open resolver.

5. Install shadowsocks-libev.
shadowsocks-libev is written in C — small footprint, efficient; on a device this small it is the one to use.

apt-get install software-properties-common -y  
add-apt-repository ppa:max-c-lv/shadowsocks-libev -y  
apt-get update  
apt-get install shadowsocks-libev -y  

Edit the /etc/config.json file:

 {
    "server":"your-ss-server ip",
    "server_port":9001,
    "local_port":1080,
    "password":"type this user's password",
    "timeout":60,
    "method":"aes-256-cfb"
 }

Run:

ss-redir -c /etc/config.json  

Also set up a dedicated DNS tunnel, listening on local port 4321 and forwarding queries through the VPS to Google's 8.8.8.8:53:

ss-tunnel -s your-ss-server-ip -p 9001 -m aes-256-cfb -k password -b 127.0.0.1 -l 4321 -L 8.8.8.8:53 -u  

Now there are two local DNS resolvers: dnscrypt-proxy on 5353 and ss-tunnel on 4321.

Why two? In principle 4321 should win, because of geo-proximity: for foreign sites reached through the ss tunnel, the VPS-side lookup result is the one to trust — dnscrypt-proxy may well geolocate you to the wrong place.

6. Configure dnsmasq.
Run dnsmasq --version | grep ipset to confirm ipset support.
Then from https://github.com/felixonmars/dnsmasq-china-list
copy the four China-acceleration conf files into the /etc/dnsmasq.d directory:

accelerated-domains.china.conf  
apple.china.conf  
bogus-nxdomain.china.conf  
google.china.conf  

Then grab the list of China IP ranges — not actually needed here, just noting it down in case it is useful someday:

wget -O- 'http://ftp.apnic.net/apnic/stats/apnic/delegated-apnic-latest' | grep ipv4 | grep CN | awk -F\| '{ printf("%s/%d\n", $4, 32-log($5)/log(2)) }' > /etc/chinadns_chnroute.txt  
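The 32-log($5)/log(2) term in that one-liner turns APNIC's address count into a CIDR prefix length; checking it standalone (%.0f rounds, where the one-liner's %d truncates):

```shell
# Sanity-check the prefix-length math:
# an allocation of 2^8 = 256 addresses is a /24, 2^16 = 65536 is a /16
awk 'BEGIN { printf "%.0f %.0f\n", 32-log(256)/log(2), 32-log(65536)/log(2) }'
```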

Fetch https://zohead.com/downloads/dnsmasq.tar.gz

and unpack it into the /etc/dnsmasq.d directory.

The final /etc/dnsmasq.conf:

domain-needed  
bogus-priv  
no-resolv  
no-poll  
conf-dir=/etc/dnsmasq.d  
address=/tms.can.cibntv.net/0.0.0.0  
server=114.114.114.114  
dhcp-range=192.168.2.50,192.168.2.100,72h  
dhcp-option=3,192.168.2.2  
dhcp-option=6,192.168.2.2  
cache-size=10000  
min-cache-ttl=1800  

The heart of it is the content of dnsmasq.tar.gz:

server=/12bet.com/127.0.0.1#5353  
ipset=/12bet.com/gfwlist  

Quite clear: this sends 12bet.com DNS queries through local port 5353, and marks packets to the resolved addresses into the ipset named gfwlist. Here, too, the 4321 resolver would be the better choice for geo-proximity.

Restart dnsmasq and that part is done.

7. Set up iptables:

ipset -N gfwlist iphash  
iptables -t nat -A PREROUTING -p tcp -m set --match-set gfwlist dst -j REDIRECT --to-port 1080  

And with that, everything works.

Configuring an OpenVPN server that provides an IPv6 tunnel over IPv4


First, a primer on IPv6 addresses.

An IPv6 address is 128 bits. The preferred notation is x:x:x:x:x:x:x:x, where each x is the hexadecimal value of one of the address's eight 16-bit pieces. Addresses range from 0000:0000:0000:0000:0000:0000:0000:0000 to ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff.

Besides this preferred form, IPv6 addresses can be written in two shorter forms:

  • Omitting leading zeros
    An address may be written with the leading zeros of each piece omitted.
    For example, 1050:0000:0000:0000:0005:0600:300c:326b can be written 1050:0:0:0:5:600:300c:326b.

  • Double colon
    A run of zeros may be replaced with a double colon (::).
    For example, ff06:0:0:0:0:0:0:c3 can be written ff06::c3.
    The double colon may appear only once in an address.
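Both shorthand rules can be checked from the shell with Python's ipaddress module (assuming python3 is installed):

```shell
# ipaddress applies both rules: leading zeros dropped, longest zero run -> "::"
python3 - <<'EOF'
import ipaddress
print(ipaddress.ip_address('1050:0000:0000:0000:0005:0600:300c:326b').compressed)
print(ipaddress.ip_address('ff06:0:0:0:0:0:0:c3').compressed)
EOF
```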

In a typical IPv6 network, a LAN subnet is a /64, and an interface obtains its unique IPv6 address via NDP (the first 64 bits are the subnet prefix; the last 64 are usually derived from the interface's MAC address).

Our scenario:

  • the server's IPv4 address is 1.2.3.4
  • the server's IPv6 range is aaaa:bbbb:cccc:dddd::/64
  • both the IPv4 and the IPv6 addresses live on eth0
  • the VPN hands clients IPv6 addresses from aaaa:bbbb:cccc:dddd:80::/112, on interface tun0

The configuration goes as follows. First edit /etc/sysctl.conf:

net.ipv4.ip_forward=1  
    ...
net.ipv6.conf.all.forwarding=1  
net.ipv6.conf.all.proxy_ndp = 1  
    ...
net.ipv4.conf.all.accept_redirects = 0  
net.ipv6.conf.all.accept_redirects = 0  
    ...
net.ipv4.conf.all.send_redirects = 0  

Next, set up iptables so the openvpn server SNATs outgoing packets:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  
or
iptables -t nat -A POSTROUTING -s 10.11.0.0/16 -j SNAT --to 172.16.8.1  

Create the /etc/openvpn/variables file:

# Tunnel subnet prefix
prefix=aaaa:bbbb:cccc:dddd:80:  
# netmask
prefixlen=112  

Prepare two scripts, run when a client connects and disconnects, to add and remove the NDP proxy rules. /etc/openvpn/up.sh:

#!/bin/sh

# Check client variables
if [ -z "$ifconfig_pool_remote_ip" ] || [ -z "$common_name" ]; then  
        echo "Missing environment variable."
        exit 1
fi

# Load server variables
. /etc/openvpn/variables

ipv6=""

ipp=$(echo "$ifconfig_pool_remote_ip" | cut -d. -f4)  
if ! [ "$ipp" -ge 2 -a "$ipp" -le 254 ] 2>/dev/null; then  
        echo "Invalid IPv4 part."
        exit 1
fi  
hexipp=$(printf '%x' $ipp)  
ipv6="$prefix$hexipp"

# Create proxy rule
/sbin/ip -6 neigh add proxy $ipv6 dev eth0

/etc/openvpn/down.sh:

#!/bin/sh

# Check client variables
if [ -z "$ifconfig_pool_remote_ip" ] || [ -z "$common_name" ]; then  
        echo "Missing environment variable."
        exit 1
fi

# Load server variables
. /etc/openvpn/variables

ipv6=""

ipp=$(echo "$ifconfig_pool_remote_ip" | cut -d. -f4)  
if ! [ "$ipp" -ge 2 -a "$ipp" -le 254 ] 2>/dev/null; then  
        echo "Invalid IPv4 part."
        exit 1
fi  
hexipp=$(printf '%x' $ipp)  
ipv6="$prefix$hexipp" 

# Delete proxy rule
/sbin/ip -6 neigh del proxy $ipv6 dev eth0
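Both scripts derive the client's IPv6 address the same way: take the last octet of its tunnel IPv4 address and append it in hex to the prefix. The mapping in isolation:

```shell
# Last IPv4 octet of the client -> hex suffix appended to the tunnel prefix
prefix=aaaa:bbbb:cccc:dddd:80:
for ipp in 2 10 254; do
    printf '%s%x\n' "$prefix" "$ipp"
done
```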

Both scripts should be mode 755.

The relevant server-side settings in server.conf:

# Run client-specific script on connection and disconnection
script-security 2  
client-connect "/etc/openvpn/up.sh"  
client-disconnect "/etc/openvpn/down.sh"

# Server mode and client subnets
server 10.8.0.0 255.255.255.0  
server-ipv6 aaaa:bbbb:cccc:dddd:80::/112  
topology subnet

# IPv6 routes
push "route-ipv6 aaaa:bbbb:cccc:dddd::/64"  
push "route-ipv6 2000::/3"  

The relevant client-side setting in client.ovpn:

remote 1.2.3.4 1194  

Start the server, then connect a client and test:

 traceroute6 ipv6.google.com

That's it.

Freelancer task #4: logging user browsing through squid


This requirement is also simple:

User Browsing Log for Open VPN server

In short: users connect to his openvpn server and browse through a squid proxy running on it; the special part is that he wants to see both the HTTP and the HTTPS browsing history.

squid runs as a transparent proxy, so it can capture the browsing history and provide caching at the same time.

The server is Ubuntu, whose stock squid is built without SSL support, so it has to be rebuilt.

Install the build dependencies:

sudo apt-get install build-essential fakeroot devscripts gawk gcc-multilib dpatch  
sudo apt-get build-dep squid3  
sudo apt-get build-dep openssl  
sudo apt-get install libssl-dev  
sudo apt-get source squid3  

That downloads the squid source and Ubuntu's patch set; unpack both:

tar zxvf squid3_3.5.12.orig.tar.gz  
cd squid3-3.5.12  
tar xf ../squid3_3.5.12-1ubuntu7.5.debian.tar.xz  

Tweak the build flags to add SSL support:

vi debian/rules  
Add --with-openssl --enable-ssl --enable-ssl-crtd under the DEB_CONFIGURE_EXTRA_FLAGS section.

DEB_CONFIGURE_EXTRA_FLAGS := BUILDCXXFLAGS="$(CXXFLAGS) $(LDFLAGS)" \  
...
                --with-default-user=proxy \
                --with-openssl \
                --enable-ssl \
                --enable-ssl-crtd
...

Build — seven deb packages come out:

debuild -us -uc -b  
cd ..  
ls -1 *.deb  
squid3_3.5.12-1ubuntu7.5_all.deb  
squid_3.5.12-1ubuntu7.5_amd64.deb  
squid-cgi_3.5.12-1ubuntu7.5_amd64.deb  
squidclient_3.5.12-1ubuntu7.5_amd64.deb  
squid-common_3.5.12-1ubuntu7.5_all.deb  
squid-dbg_3.5.12-1ubuntu7.5_amd64.deb  
squid-purge_3.5.12-1ubuntu7.5_amd64.deb  

Install: the language pack first, then three of the packages we just built:

sudo apt-get install squid-langpack  
sudo dpkg -i squid_3.5.12-1ubuntu7.5_amd64.deb squid-common_3.5.12-1ubuntu7.5_all.deb squid-dbg_3.5.12-1ubuntu7.5_amd64.deb  

Check that the new squid supports SSL:

squid -v|grep ssl  
configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake,smb_lm' '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group' '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--with-openssl' '--enable-ssl' '--enable-ssl-crtd' '--enable-build-info=Ubuntu linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security'

cd /usr/lib/squid  
ls ssl_crtd  

Generate the SSL key and certificate, copy them into place, and update ca-certificates:

openssl genrsa -out squid.key 2048

openssl req -new -key squid.key -out squid.csr  
You are about to be asked to enter information that will be incorporated  
into your certificate request.  
What you are about to enter is what is called a Distinguished Name or a DN.  
There are quite a few fields but you can leave some blank  
For some fields there will be a default value,  
If you enter '.', the field will be left blank.  
-----
Country Name (2 letter code) [AU]:CN  
State or Province Name (full name) [Some-State]:Beijing  
Locality Name (eg, city) []:Beijing  
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Rendoumi.com  
Organizational Unit Name (eg, section) []:Rendoumi.com  
Common Name (e.g. server FQDN or YOUR name) []:159.89.116.192  
Email Address []:

Please enter the following 'extra' attributes  
to be sent with your certificate request  
A challenge password []:  
An optional company name []:


openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt  
Signature ok  
subject=/C=CN/ST=Beijing/L=Beijing/O=Rendoumi.com/OU=Rendoumi.com/CN=159.89.116.192  
Getting Private key

sudo cp squid.crt /usr/local/share/ca-certificates

sudo /usr/sbin/update-ca-certificates  
Updating certificates in /etc/ssl/certs...  
1 added, 0 removed; done.  
Running hooks in /etc/ca-certificates/update.d...  
done.

sudo cp squid.pem /etc/squid  
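One gap worth flagging: squid.pem is copied here but never created by any earlier step. Presumably it is squid.crt and squid.key concatenated into one PEM for squid's cert= option — an assumption, so verify against your squid build's documentation. A hedged sketch with dummy stand-in files (on the real box: cat squid.crt squid.key > squid.pem):

```shell
# Dummy stand-ins so the concatenation step is demonstrable
cd /tmp
printf -- '-----BEGIN CERTIFICATE-----\n' > demo-squid.crt
printf -- '-----BEGIN PRIVATE KEY-----\n' > demo-squid.key

# Certificate first, then key, in one PEM file
cat demo-squid.crt demo-squid.key > demo-squid.pem
grep -c 'BEGIN' demo-squid.pem
```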

Edit the /etc/squid/squid.conf configuration file:

cd /etc/squid  
cat squid.conf|grep -v ^# | grep -v ^$

sudo vi /etc/squid/squid.conf  
----------------------------------------
acl SSL_ports port 443  
acl Safe_ports port 80          # http  
acl Safe_ports port 21          # ftp  
acl Safe_ports port 443         # https  
acl Safe_ports port 70          # gopher  
acl Safe_ports port 210         # wais  
acl Safe_ports port 1025-65535  # unregistered ports  
acl Safe_ports port 280         # http-mgmt  
acl Safe_ports port 488         # gss-http  
acl Safe_ports port 591         # filemaker  
acl Safe_ports port 777         # multiling http  
acl CONNECT method CONNECT  
acl localnet src 10.8.0.0/16

http_access deny !Safe_ports  
http_access deny CONNECT !SSL_ports

http_access allow localhost manager  
http_access deny manager

http_access allow localhost  
http_access allow localnet  
http_access deny all

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080  
refresh_pattern ^gopher:        1440    0%      1440  
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0  
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880  
# example lin deb packages
#refresh_pattern (\.deb|\.udeb)$   129600 100% 129600
refresh_pattern .               0       20%     4320

shutdown_lifetime 3

http_port  3128 intercept  
https_port 3129 intercept ssl-bump  generate-host-certificates=on version=1 options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE dynamic_cert_mem_cache_size=4MB cert=/etc/squid/squid.pem

always_direct allow all  
ssl_bump none localhost  
ssl_bump server-first all  
sslproxy_cert_error allow all  
sslproxy_flags DONT_VERIFY_PEER  
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB  
sslcrtd_children 8 startup=1 idle=1  
----------------------------------------

Initialize the ssl_db:

sudo /usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db/  
sudo chown -R proxy /var/lib/ssl_db  

Restart squid:

sudo systemctl restart squid.service  

One peculiar thing: the client had written a huge pile of ufw rules, stuffing iptables so full that it was impossible to clear all the rules by hand. That was a first for me; the only way out was to clear them with a restore file, like so:

vi cl.txt  
-----------------------
# Empty the entire filter table
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT  
-----------------------

sudo iptables-restore < cl.txt  

Finally, modify iptables to send all port 80 and 443 requests to squid:

sudo vi /etc/rc.local  
iptables -t nat -A PREROUTING -p tcp -s 10.8.0.0/24 --dport 80 -j REDIRECT --to-ports 3128  
iptables -t nat -A PREROUTING -p tcp -s 10.8.0.0/24 --dport 443 -j REDIRECT --to-ports 3129  
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to 159.89.116.192  
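The three rules above follow a fixed shape. Purely as an illustration (this helper is not part of the original setup), a small Python function can render them for any client subnet; the defaults below are the squid ports and SNAT address used here:

```python
def redirect_rules(subnet, http_port=3128, https_port=3129, snat_ip=None):
    """Render the nat-table commands for a transparent squid setup:
    redirect port 80/443 from the given subnet to squid's intercept
    ports, and optionally SNAT everything leaving the subnet."""
    rules = [
        f"iptables -t nat -A PREROUTING -p tcp -s {subnet} "
        f"--dport 80 -j REDIRECT --to-ports {http_port}",
        f"iptables -t nat -A PREROUTING -p tcp -s {subnet} "
        f"--dport 443 -j REDIRECT --to-ports {https_port}",
    ]
    if snat_ip:
        rules.append(
            f"iptables -t nat -A POSTROUTING -s {subnet} "
            f"! -d {subnet} -j SNAT --to {snat_ip}"
        )
    return rules

# The values from this setup:
for rule in redirect_rules("10.8.0.0/24", snat_ip="159.89.116.192"):
    print(rule)
```

This only prints the commands; running them still requires root on the gateway box.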

Access the web through the proxy and verify:

sudo tail -f /var/log/squid/access.log  

One problem with this approach: the HTTPS traffic is effectively hijacked and proxied, so clients will pop up a certificate-trust prompt. The only real fix is to pre-install the certificate on every client.

That is all.


Freelancer Task 5: Multi-link Bonded VPN


This task is quite interesting.

Link: https://www.freelancer.com/projects/software-architecture/Bonded-connection-for-video-streaming

Task description:

we need a set of vpn server / client programmed for embedded linux (or windows)  
to bond multiple 4g lte modems or wifi connectios and stitch them back together  
on server side to stream video feeds. the connection must be stable and have  
the maximum available bandwidth with no drop in some connection drops.  
simillar to service called SPEEDIFY (but it doesn't work well)

this can be also achieved by splitting video packets and send them through  
 different links and stitch the video packets back on the server side.

In short, it is most likely an embedded system, a Raspberry Pi or NanoPi or the like, with 4G USB modems attached, and they want to bond the links to stream video upstream.

The solution is simple and goes straight to the pain point:

http://vrayo.com/how-to-set-up-a-bonding-vpn-connection-in-linux/

The task is still in its open-bid period, $250 - $750.

It closes around April 18, 2018. Anyone interested can give it a shot; I have no time, buried in tasks as I am.

Installing Fabric 1.0 on CentOS 7


A colleague suddenly felt inspired and wanted to look into blockchain. He sent this link:

http://www.voidcn.com/article/p-szuznezg-bqr.html

First of all, the installation method there is outdated. There are piles of similar articles online, and none of them will get you to a successful install.

You will get stuck at the last step, network_setup.sh up, with this error:

2018-04-25 03:09:49.100 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer.example.com on 127.0.0.11:53: no such host"; Reconnecting to {orderer.example.com:7050 <nil>}  
Error: Error connecting due to  rpc error: code = Unavailable desc = grpc: the connection is unavailable  

That is the error. At first glance it looks like a hostname problem, but docker ps -a reveals:

e170bd588a44        hyperledger/fabric-orderer     "orderer"                3 minutes ago       Exited (2) 3 minutes ago  

The orderer container failed to start; docker logs e170bd588a44 shows:

2018-04-25 03:08:48.253 UTC [orderer/multichain] newLedgerResources -> CRIT 067 Error creating configtx manager and handlers: Error deserializing key Capabilities for group /Channel: Unexpected key Capabilities  
panic: Error creating configtx manager and handlers: Error deserializing key Capabilities for group /Channel: Unexpected key Capabilities  
goroutine 1 [running]:  
panic(0xb31bc0, 0xc42020f160)  
    /opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc420212540, 0xc71091, 0x30, 0xc42020f0b0, 0x1, 0x1)  
    /opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/multichain.(*multiLedger).newLedgerResources(0xc420374730, 0xc42035d350, 0xc42035d350)  
    /opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:164 +0x393
github.com/hyperledger/fabric/orderer/multichain.NewManagerImpl(0x122a2a0, 0xc420388100, 0xc42035ce40, 0x1226ea0, 0x126ee88, 0x0, 0x0)  
    /opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:114 +0x23b
main.initializeMultiChainManager(0xc4201df440, 0x1226ea0, 0x126ee88, 0xc42020ea90, 0x1)  
    /opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:219 +0x27a
main.main()  
    /opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:75 +0x392

A Capabilities-key compatibility problem. Shit. It took two full days of debugging to sort out.

The complete installation steps:

1. Set up the yum mirrors

yum install -y wget  
rm -rf /etc/yum.repos.d/*  
wget -q http://mirrors.163.com/.help/CentOS7-Base-163.repo -O /etc/yum.repos.d/CentOS7-Base-163.repo  

2. Sync the clock

yum install -y ntp ntpdate ntp-doc  
ntpdate 0.us.pool.ntp.org  
hwclock --systohc  
systemctl enable ntpd.service  
systemctl start ntpd.service  

3. Upgrade the kernel and install development packages

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm  
yum install lsof deltarpm -y  
yum -y --enablerepo=elrepo-kernel install kernel-ml # installs the latest stable mainline kernel

yum groupinstall -y "development tools"  
grub2-mkconfig -o /boot/grub2/grub.cfg  
grub2-set-default 0  
reboot  

The three steps above can actually be skipped. Heh, fell into the pit, didn't you?

4. Install Docker and docker-compose

yum install -y docker  
systemctl enable docker  
systemctl start docker  
curl -L https://get.daocloud.io/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose  
chmod +x /usr/local/bin/docker-compose  

5. Install Go, version 1.7.5. I checked Fabric's CI and it uses 1.7.5, so we match it exactly

wget https://storage.googleapis.com/golang/go1.7.5.linux-amd64.tar.gz  
tar xf go1.7.5.linux-amd64.tar.gz  
mv go /usr/local/  
mkdir -p /root/golang  
cat<<'EOF'>>/etc/profile  
export GOPATH=/root/golang  
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin  
EOF  
ln -s /root/golang /opt/gopath  
source /etc/profile  

6. Clone Fabric

yum install git -y  
yum update nss curl  
git clone -b release-1.0 https://github.com/hyperledger/fabric  

This is where everyone online goes wrong: they clone the master branch and then checkout -b v1.0.0. That is incorrect; you must clone the separate release-1.0 branch. As for upgrading nss and curl: my CentOS 7 is CentOS Linux release 7.1.1503 (Core), old enough that git errors out without the upgrade.

7. Pull the Docker images

cd /root/golang/src/github.com/hyperledger/fabric/examples/e2e_cli  
source download-dockerimages.sh -c x86_64-1.0.6 -f x86_64-1.0.6  

Note that the actual version number on release-1.0 is 1.0.6, so download 1.0.6. Best to download through a proxy, or via an accelerator such as Aliyun's or DaoCloud's, like this:

Enable Docker's official China-region registry mirror:
vim /etc/sysconfig/docker  
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false  --registry-mirror=https://registry.docker-cn.com'  
systemctl restart docker  

8. Done

cd /root/golang/src/github.com/hyperledger/fabric/examples/e2e_cli

./network_setup.sh up

Freelancer Task 6: Compile an ipk File on LEDE (OpenWrt)


This was a failed task, and it would fail again if I redid it, because there was no way to verify the result. Shitty enough that I was docked a $10 fee.

Recording it here for future reference.

The task:

Job would be to compile an ipk file of SHC that would work with LEDE OS (openwrt) and the processor used in our system - will provide details

SHC can be downloaded here: http://www.datsi.fi.upm.es/~frosal/

I think you must have a 64 bit system to use the SDK to compile the file  

In translation: compile an SHC for the OpenWrt platform, and it has to work.

The full steps:

How to compile SHC on LEDE:

Tested build environment:

OS: Ubuntu 14.04.5 LTS  
CPU: ARMv7 Processor rev 5 (v7l)


Before you begin, check your system is updated, i.e.

sudo apt-get update  
sudo apt-get upgrade  
sudo apt-get autoremove


Step-by-step manual:  
Note: perform all steps as regular (non-root) user. User must be in sudo group.

1. Update your sources

sudo apt-get update

2. Install necessary packages:

sudo apt-get install g++ libncurses5-dev zlib1g-dev bison flex unzip autoconf gawk make gettext gcc binutils patch bzip2 libz-dev asciidoc subversion sphinxsearch libtool sphinx-common libssl-dev libssl0.9.8

3. Get latest LEDE source from git repository  
   We know our CPU is armv7, it belongs to arm64, so go to http://downloads.lede-project.org/releases , just download sdk.

wget http://downloads.lede-project.org/releases/17.01.4/targets/arm64/generic/lede-sdk-17.01.4-arm64_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz


4. No Need to Update and install all LEDE packages  
   We just want to compile shc, not the other packages, so don't update.


5. Run 'make menuconfig' (Just Save and Exit)


6. Get SHC sources to LEDE package tree  
 (we are in the source directory, and shc-3.8.9b.tgz is there too)

wget http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz  
mkdir -p package/shc/src  
tar xvf shc-3.8.9b.tgz -C package/shc/src --strip-components 1

7. Make ipk Makefile

vi package/shc/Makefile  
##############################################
# OpenWrt Makefile for shc program
#
#
# Most of the variables used here are defined in
# the include directives below. We just need to 
# specify a basic description of the package, 
# where to build our program, where to find 
# the source files, and where to install the 
# compiled program on the router. 
# 
# Be very careful of spacing in this file.
# Indents should be tabs, not spaces, and 
# there should be no trailing whitespace in
# lines that are not commented.
# 
##############################################

include $(TOPDIR)/rules.mk

# Name and release number of this package

PKG_NAME:=shc  
PKG_VERSION:=3.8.9b  
PKG_MAINTAINER:=Francisco, Rosales, <frosal@fi.upm.es>


# This specifies the directory where we're going to build the program.  
# The root build directory, $(BUILD_DIR), is by default the build_mipsel 
# directory in your OpenWrt SDK directory
PKG_BUILD_DIR := $(BUILD_DIR)/$(PKG_NAME)


include $(INCLUDE_DIR)/package.mk



# Specify package information for this program. 
# The variables defined here should be self explanatory.
# If you are running Kamikaze, delete the DESCRIPTION 
# variable below and uncomment the Kamikaze define
# directive for the description below
define Package/$(PKG_NAME)  
    SECTION:=utils
    CATEGORY:=Utilities
    TITLE:= shc ---- This tool generates a stripped binary executable version of the script specified at command line.
    URL:=http://www.datsi.fi.upm.es/~frosal
endef


# Uncomment portion below for Kamikaze and delete DESCRIPTION variable above
define Package/$(PKG_NAME)/description  
    shc ---- This tool generates a stripped binary executable version
        of the script specified at command line."
endef

# Specify what needs to be done to prepare for building the package.
# In our case, we need to copy the source files to the build directory.
# This is NOT the default.  The default uses the PKG_SOURCE_URL and the
# PKG_SOURCE which is not defined here to download the source from the web.
# In order to just build a simple program that we have just written, it is
# much easier to do it this way.
define Build/Prepare  
    mkdir -p $(PKG_BUILD_DIR)
    $(CP) ./src/* $(PKG_BUILD_DIR)/
endef


# We do not need to define Build/Configure or Build/Compile directives
# The defaults are appropriate for compiling a simple program such as this one

# Specify where and how to install the program. Since we only have one file, 
# the shc executable, install it by copying it to the /bin directory on
# the router. The $(1) variable represents the root directory on the router running 
# OpenWrt. The $(INSTALL_DIR) variable contains a command to prepare the install 
# directory if it does not already exist.  Likewise $(INSTALL_BIN) contains the 
# command to copy the binary file from its current location (in our case the build
# directory) to the install directory.
define Package/$(PKG_NAME)/install  
    $(INSTALL_DIR) $(1)/bin
    $(INSTALL_BIN) $(PKG_BUILD_DIR)/shc $(1)/bin/
endef


# This line executes the necessary commands to compile our program.
# The above define directives specify all the information needed, but this
# line calls BuildPackage which in turn actually uses this information to
# build a package.
$(eval $(call BuildPackage,$(PKG_NAME)))


8. Compile shc ipk without errors.

make package/shc/compile V=99

9. Building process completed without errors.  
Now we have :  
binary packages directory: source/bin/packages/aarch64_armv8-a/  
[SHC_BIN] = ./bin/packages/aarch64_armv8-a/base/shc_3.8.9b_aarch64_armv8-a.ipk

10. Copy and install SHC .ipk to LEDE device.

scp shc_3.8.9b_aarch64_armv8-a.ipk root@<LEDE device IP address or name>:/tmp/  
ssh root@<LEDE device IP address or name> #IP usually 192.168.1.1  
opkg install shc_3.8.9b_aarch64_armv8-a.ipk

11. Create test script and compile it to execute. (in LEDE shell)  
ssh root@<LEDE device IP address or name> #IP usually 192.168.1.1

vi /tmp/1.sh  
#!/bin/sh
echo "hahahaha"

shc -v -f /tmp/1.sh  
/tmp/1.sh.x

A few points deserve attention here. First, many online tutorials open right away with:

./scripts/feeds update -a
./scripts/feeds install -a

Do not do this. Those two commands download every feed and package; compiling them all wastes half a day of your life. Our only goal is to compile shc; everything else can be ignored.

Second, the Makefile: compiling shc by itself is a one-liner, but to bring it into LEDE you must have this Makefile.

That produced shc_3.8.9b_aarch64_armv8-a.ipk cleanly, but the client reported it did not work once copied over. Damn!

Only then did I study shc properly. It is really an obfuscator: it turns a shell script into *.c, then compiles that with gcc into a binary.

But knowing OpenWrt, there is usually no room to install gcc at all, so gcc has to go onto an external USB drive, as described here:

http://www.th7.cn/Program/cp/201708/1224085.shtml

Verification is also rather odd: write the shell script, obfuscate it with shc on the OpenWrt box, copy the obfuscated .c file out, compile it into a binary with gcc, then copy it back to OpenWrt to run and verify.

For reference, the standalone gcc cross-compile commands:

export STAGING_DIR=~/LEDE-IPQ40XX/staging_dir  
cd ~/LEDE-IPQ40XX  
./staging_dir/toolchain-arm_cortex-a7+neon-vfpv4_gcc-7.3.0_musl_eabi/bin/arm-openwrt-linux-muslgnueabi-gcc  script.sh.x.c -o script.sh.x

Shit. The shc I compiled could in fact run and produce the obfuscated *.c, but for some reason the binary built by arm-gcc would not run on OpenWrt. Worse, I did not have their platform, so I could not install gcc on it myself to see what was wrong.

Les Misérables indeed: docked a $10 fee.

I asked the client afterwards. He found someone else who solved it, with this approach:

9. Compile shell script

${HOME}/shc.sh -r ${HOME}/openwrt-sdk -n hello-world -s ${HOME}/hello-world.sh.x.c;
ls -lh ${HOME}/openwrt-sdk/bin/packages/arm_cortex-a7_neon-vfpv4/base/script_*;  
ls -lh ${HOME}/openwrt-sdk/build_dir/target-arm_cortex-a7+neon-vfpv4_musl_eabi/hello-world-0.0.1/hello-world;

10.  Install the script package on OpenWrt.

scp ${HOME}/openwrt/bin/packages/arm_cortex-a7_neon-vfpv4/base/script_* root@192.168.1.1:/tmp/;  
ssh root@192.168.1.1;  
opkg install /tmp/script_*;  
exit;  

See? I cannot verify this either; it is left for someone motivated to try. Recorded here.

Configuring s3cmd Access to Ceph After Setting Up RGW


I will skip the Ceph installation itself; originally I only meant to use rbd block devices together with KVM.

A colleague had also brought up rgw. I had used Amazon S3 before, and s3cmd is a real pleasure to use, so how do you access Ceph with s3cmd?

The process:

First, the Ceph release is luminous, installed with ceph-deploy, and the install user is cephuser.

After logging in, operate via sudo as cephuser; using root directly will not work. Note:

sudo radosgw-admin user create --uid="bajie" --display-name="Ba Jie"  
{
    "user_id": "bajie",
    "display_name": "Ba Jie",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "bajie",
            "access_key": "7SIW23M9A411SY2X3H8L",
            "secret_key": "CTSMz1UqVj4Ft4IGe4ibVFRwD9qN6iIjnGIxe9Ns"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

OK, in fact only two lines matter:

            "access_key": "7SIW23M9A411SY2X3H8L"
            "secret_key": "CTSMz1UqVj4Ft4IGe4ibVFRwD9qN6iIjnGIxe9Ns"

Then install s3cmd:

sudo yum install s3cmd  

Next, find which host and port our rgw is running on:

ps ax|grep radosgw  
 28861 ?        Ssl    6:36 /usr/bin/radosgw -f --cluster ceph --name client.rgw.vis-16-10-81 --setuser ceph --setgroup ceph

lsof -p 28861|grep LISTEN  
radosgw 28861 ceph   27u  IPv4              99863      0t0       TCP *:7480 (LISTEN)  

It is running on 172.16.10.81:7480.

Generate the .s3cfg file by hand:

cat<<EOF>~/.s3cfg  
[default]
access_key = 7SIW23M9A411SY2X3H8L  
host_base = 172.16.10.81:7480  
host_bucket = 172.16.10.81:7480/%(bucket)  
secret_key = CTSMz1UqVj4Ft4IGe4ibVFRwD9qN6iIjnGIxe9Ns  
cloudfront_host = 172.16.10.81:7480  
use_https = False  
EOF  

Just fill in your own values the same way.
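Since radosgw-admin prints JSON, the two keys can also be extracted and the .s3cfg rendered programmatically instead of by eye. A minimal sketch, assuming the JSON is the user-create output shown above (trimmed to the relevant part) and leaving the actual file write out:

```python
import json

# `radosgw-admin user create` output from above, trimmed to what we need.
RADOSGW_OUTPUT = """
{
    "user_id": "bajie",
    "keys": [
        {
            "user": "bajie",
            "access_key": "7SIW23M9A411SY2X3H8L",
            "secret_key": "CTSMz1UqVj4Ft4IGe4ibVFRwD9qN6iIjnGIxe9Ns"
        }
    ]
}
"""

def extract_keys(output):
    """Return (access_key, secret_key) from radosgw-admin user JSON."""
    key = json.loads(output)["keys"][0]
    return key["access_key"], key["secret_key"]

def render_s3cfg(access_key, secret_key, endpoint):
    """Render a .s3cfg body matching the hand-written one above."""
    return (
        "[default]\n"
        f"access_key = {access_key}\n"
        f"host_base = {endpoint}\n"
        f"host_bucket = {endpoint}/%(bucket)\n"
        f"secret_key = {secret_key}\n"
        f"cloudfront_host = {endpoint}\n"
        "use_https = False\n"
    )

ak, sk = extract_keys(RADOSGW_OUTPUT)
print(render_s3cfg(ak, sk, "172.16.10.81:7480"))
```

In practice you would pipe the radosgw-admin output in and write the result to ~/.s3cfg.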

Then run s3cmd:

#create a bucket
s3cmd mb s3://iso  
#put a file into the bucket
s3cmd put ubuntu-16.04-desktop-amd64.iso s3://iso/  
#get the file back out of the bucket
s3cmd get s3://iso/ubuntu-16.04-desktop-amd64.iso  

If you want to program against it, install the boto library:

sudo yum install -y python-boto  

Write a test program, s3.py:

import boto.s3.connection

access_key = '7SIW23M9A411SY2X3H8L'  
secret_key = 'CTSMz1UqVj4Ft4IGe4ibVFRwD9qN6iIjnGIxe9Ns'  
conn = boto.connect_s3(  
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='172.16.10.81', port=7480,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
       )

bucket = conn.create_bucket('my-new-bucket')  
for bucket in conn.get_all_buckets():  
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

Run s3.py, then s3cmd ls to confirm:

s3cmd ls  
2018-04-28 05:29  s3://my-new-bucket  

Tomcat Access Logging


Our Tomcat access log records the following pattern:

pattern="%a|%A|%T|%{X-Forwarded-For}i|%l|%u|%t|%r|%s|%b|%{Referer}i|%{User-Agent}i " resolveHosts="false"/>  

What does each field mean?

  • %a - Remote IP address
  • %A - Local IP address
  • %b - Bytes sent, excluding HTTP headers, or '-' if zero
  • %B - Bytes sent, excluding HTTP headers
  • %h - Remote host name (or IP address if resolveHosts is false)
  • %H - Request protocol
  • %l - Remote logical username from identd (always returns '-')
  • %m - Request method (GET, POST, etc.)
  • %p - Local port on which this request was received
  • %q - Query string (prepended with a '?' if it exists)
  • %r - First line of the request (method and request URI)
  • %s - HTTP status code of the response
  • %S - User session ID
  • %t - Date and time, in Common Log Format
  • %u - Remote user that was authenticated (if any), else '-'
  • %U - Requested URL path
  • %v - Local server name
  • %D - Time taken to process the request, in millis
  • %T - Time taken to process the request, in seconds
  • %I - Current request thread name (can compare later with stacktraces)

In addition, request query parameters, session attribute values, cookie values, and HTTP request/response header values can also be written to the log file.

It mimics Apache's syntax:

  • %{XXX}i - xxx names an incoming request header (HTTP Request)
  • %{XXX}o - xxx names an outgoing response header (HTTP Response)
  • %{XXX}c - xxx names a specific cookie
  • %{XXX}r - xxx names a ServletRequest attribute
  • %{XXX}s - xxx names an HttpSession attribute

"%a|%A|%T|%{X-Forwarded-For}i|%l|%u|%t|%r|%s|%b|%{Referer}i|%{User-Agent}i "

Field by field, that is:

"远程地址|本地地址|相应时间|X-Forwarded-For|远程用户|远程认证用户|时间|第一行|回应code|发送字节|Referer|User-Agent "

An example:

10.11.9.190|10.11.10.13|0.153|111.199.189.182|-|-|[04/May/2018:21:31:02 +0800]|GET /cms/rest.htm?v=1.0 HTTP/1.0|200|48939|-|Dalvik/2.1.0 (Linux; U; Android 7.0; PIC-AL00 Build/HUAWEIPIC-AL00)  

This is fairly complete: local IP and remote IP, and with nginx proxying in front, the real client IP, the request, and the User-Agent are all recorded for later lookup and troubleshooting.
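Because the fields are delimited with '|', a log line in this format splits cleanly. As an illustration, here is a minimal Python sketch; the field names are informal labels of my own matching the translation above, not Tomcat API names:

```python
# Informal labels for the twelve fields of the pattern
# %a|%A|%T|%{X-Forwarded-For}i|%l|%u|%t|%r|%s|%b|%{Referer}i|%{User-Agent}i
FIELDS = [
    "remote_addr", "local_addr", "seconds", "x_forwarded_for",
    "logical_user", "auth_user", "time", "request",
    "status", "bytes", "referer", "user_agent",
]

def parse_line(line):
    """Map one pipe-delimited access-log line onto the pattern's fields."""
    values = line.rstrip().split("|")
    return dict(zip(FIELDS, values))

# The example line from above:
line = ("10.11.9.190|10.11.10.13|0.153|111.199.189.182|-|-|"
        "[04/May/2018:21:31:02 +0800]|GET /cms/rest.htm?v=1.0 HTTP/1.0|"
        "200|48939|-|Dalvik/2.1.0 (Linux; U; Android 7.0; PIC-AL00 Build/HUAWEIPIC-AL00)")

entry = parse_line(line)
print(entry["remote_addr"], entry["status"], entry["user_agent"])
```

Note the naive split assumes no field contains a literal '|'; a request line or User-Agent containing one would shift the alignment.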
