tpws: multi thread resolver

bol-van 2024-04-02 18:14:03 +03:00
parent 81357a41bf
commit 1b01b62b34
22 changed files with 451 additions and 80 deletions


View File

@ -270,4 +270,7 @@ tpws: incompatible oob parameter change. it doesn't take oob byte anymore. inste
blockcheck: quick mode, strategy order optimizations, QUIC protocol support
nfqws: syndata desync mode
v56
tpws: mss fooling
tpws: multi threaded resolver. eliminates blocking caused by hostname resolving.

View File

@ -561,6 +561,7 @@ tpws is transparent proxy.
--remote-sndbuf=<bytes> ; SO_SNDBUF for remote legs
--skip-nodelay ; do not set TCP_NODELAY for outgoing connections. incompatible with split.
--no-resolve ; disable socks5 remote dns
--resolver-threads=<int> ; number of resolver worker threads
--maxconn=<max_connections> ; max number of local legs
--maxfiles=<max_open_files> ; max file descriptors (setrlimit). min requirement is (X*connections+16), where X=6 in tcp proxy mode, X=4 in tampering mode.
; it's worth reserving headroom with a 1.5 multiplier. by default maxfiles is (X*connections)*1.5+16
@ -646,12 +647,10 @@ It's possible to bind to any nonexistent address in transparent mode but in sock
In socks proxy mode no additional system privileges are required. Connections to local IPs of the system where tpws runs are prohibited.
tpws supports remote dns resolving (curl : `--socks5-hostname`, firefox : `socks_remote_dns=true`), but does it in blocking mode.
tpws uses async sockets for all activity, but resolving can break this model.
if tpws serves many clients this can cause trouble. a DoS attack against tpws is also possible.
if remote resolving causes trouble, configure clients to use local name resolution and use
the `--no-resolve` option on the tpws side.
tpws uses async sockets for all activities. Domain names are resolved in a multi-threaded pool.
Resolving does not freeze other connections, but if there are too many requests, resolving delays may increase.
The number of resolver threads is chosen automatically in proportion to `--maxconn` and can be overridden using `--resolver-threads`.
To disable hostname resolving use the `--no-resolve` option.
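
If `--resolver-threads` is not set, the pool size is derived from `--maxconn` (the `5 + maxconn/50` default appears later in this diff, in `parse_params`). A minimal sketch of that sizing logic; `default_resolver_threads` is an illustrative helper, not an actual tpws function:

```c
/* Sketch of the default resolver pool sizing (see parse_params in tpws.c in this diff).
   default_resolver_threads() is illustrative only, not part of tpws. */
#include <stdio.h>

static int default_resolver_threads(int maxconn, int resolver_threads_opt)
{
	if (!resolver_threads_opt)            /* --resolver-threads not supplied */
		return 5 + maxconn / 50;      /* formula used by parse_params() */
	return resolver_threads_opt;          /* explicit value, accepted range 1..300 */
}

int main(void)
{
	printf("--maxconn=512 : %d resolver threads\n", default_resolver_threads(512, 0));   /* 15 */
	printf("--maxconn=512 --resolver-threads=40 : %d\n", default_resolver_threads(512, 40));
	return 0;
}
```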
`--disorder` is an additional flag to any split option.
It tries to simulate the `--disorder2` option of `nfqws` using the standard socket API, without the need for additional privileges.
@ -670,7 +669,8 @@ Use of `--tlsrec` without filters is discouraged.
The server replies with its own MSS in the SYN,ACK packet. Usually servers lower their packet sizes, but they still don't
fit into the supplied MSS. The greater the MSS the client sets, the bigger the server's packets will be.
If that's enough to split the TLS 1.2 ServerHello, it may fool a DPI that checks the certificate domain name.
This scheme may significantly lower speed. Hostlist and TLS version filters are not possible.
This scheme may significantly lower speed. The hostlist filter is possible only in socks mode and only if the client uses remote resolving (firefox `network.proxy.socks_remote_dns`).
TLS version filters are not possible.
`--mss-pf` sets the port filter for MSS. Use `--mss-pf=443` to apply MSS only to https.
Likely not required for TLS1.3. If TLS1.3 is negotiated then MSS only makes things worse.
Use only if nothing better is available. Works only on Linux, not on BSD or MacOS.
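
Under the hood, MSS fooling of this kind is typically applied with the standard `TCP_MAXSEG` socket option before `connect()`. A minimal Linux-only sketch, not the literal tpws code (which additionally honors `--mss-pf` and validates the 88..32767 range); `apply_mss_fooling` is an illustrative name:

```c
/* Minimal Linux-only sketch: clamp the MSS advertised in our SYN by setting
   TCP_MAXSEG before connect(). Illustrative only; tpws also checks --mss-pf. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

static int apply_mss_fooling(int remote_fd, int mss)
{
	/* tpws accepts --mss values in the range 88..32767 */
	if (setsockopt(remote_fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
	{
		perror("setsockopt (TCP_MAXSEG)");
		return -1;
	}
	return 0;   /* the clamped MSS goes out in the SYN of the following connect() */
}
```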

View File

@ -1,4 +1,4 @@
zapret v.55
zapret v.56
English
-------
@ -642,6 +642,7 @@ tpws is a transparent proxy.
--bind-wait-only ; wait for all binds and exit. result is 0 on success, otherwise non-zero.
--socks ; implement a socks4/5 proxy instead of a transparent proxy
--no-resolve ; disable name resolving via socks5
--resolver-threads ; number of resolver threads
--port=<port> ; port to listen on
--maxconn=<max_connections> ; maximum number of connections from clients to the proxy
--maxfiles=<max_open_files> ; max number of file descriptors (setrlimit). min requirement is (X*connections+16), where X=6 in tcp proxy mode, X=4 in tampering mode.
@ -756,13 +757,13 @@ tpws can be used on a mobile device
Socks versions 4 and 5 without authorization are supported. The protocol version is detected automatically.
Connections to IPs of the device tpws itself runs on, including localhost, are prohibited.
socks5 allows hosts to be resolved remotely (curl : --socks5-hostname firefox : socks_remote_dns=true).
tpws supports this feature, but the resolving is blocking. While the system resolves a host
(which can take seconds), all activity stops.
tpws runs entirely on asynchronous sockets, but resolving can break this model.
It makes DoS attacks against tpws possible. If tpws serves many clients, frequent resolving
can significantly degrade the quality of service.
If remote resolving causes problems, configure the clients to resolve locally and enable the
--no-resolve option on the tpws side.
tpws supports this feature asynchronously, without blocking the processing of other connections, by using
a multi-threaded resolver pool. The number of threads is chosen automatically based on "--maxconn",
but it can also be set manually via the "--resolver-threads" parameter.
The socks request is paused until the domain is converted to an ip address in one of the resolver
threads. The wait can be longer if all threads are busy.
If the "--no-resolve" parameter is given, connections by hostname are rejected and the resolver pool
is not created, which saves resources.
The --hostpad=<bytes> parameter adds padding headers before Host: totaling the specified number of bytes.
If <bytes> is too large, the padding is split into separate headers of 2K each.
@ -803,7 +804,10 @@ tpws runs entirely on asynchronous sockets
the server sends. On TLS 1.2, if the server split its reply so that the domain from the certificate did not end up in the first packet,
this can fool a DPI that inspects the server's reply.
The scheme can significantly lower the speed and may not work on all sites.
Incompatible with the hostlist filter. Filtering by TLS version is not possible.
Compatible with the hostlist filter only in socks mode with remote host resolving enabled
(firefox network.proxy.socks_remote_dns). This is the only case when tpws can learn the hostname
as early as the connection establishment stage.
Filtering by TLS version is not possible.
Instead, there is a port filter, --mss-pf. --mss-pf=443 applies the fooling only to https.
If you apply this option to TLS1.3 sites and the browser also supports TLS1.3, you only make things worse.
But there is no way to know automatically when to apply it and when not, since MSS is sent only in

View File

@ -44,7 +44,7 @@ struct params_s
uid_t uid;
gid_t gid;
bool daemon;
int maxconn,maxfiles,max_orphan_time;
int maxconn,resolver_threads,maxfiles,max_orphan_time;
int local_rcvbuf,local_sndbuf,remote_rcvbuf,remote_sndbuf;
bool tamper; // any tamper option is set

tpws/resolver.c (new file, 231 lines)
View File

@ -0,0 +1,231 @@
#define _GNU_SOURCE
#include "resolver.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <semaphore.h>
#include <fcntl.h>
#include <pthread.h>
#include <signal.h>
#include <sys/syscall.h>
#include <errno.h>
#include <unistd.h>
#define SIG_BREAK SIGUSR1
#ifdef __APPLE__
static const char *sem_name="/tpws_resolver";
#endif
TAILQ_HEAD(resolve_tailhead, resolve_item);
typedef struct
{
int fd_signal_pipe;
sem_t *sem;
#ifndef __APPLE__
sem_t _sem;
#endif
struct resolve_tailhead resolve_list;
pthread_mutex_t resolve_list_lock;
int threads;
pthread_t *thread;
bool bInit, bStop;
} t_resolver;
static t_resolver resolver = { .bInit = false };
#define rlist_lock pthread_mutex_lock(&resolver.resolve_list_lock)
#define rlist_unlock pthread_mutex_unlock(&resolver.resolve_list_lock)
int resolver_thread_count(void)
{
return resolver.bInit ? resolver.threads : 0;
}
// worker thread: wait on the semaphore, pop one item from the resolve list, resolve it with
// getaddrinfo(), then hand the item pointer back to the event loop through the signal pipe
static void *resolver_thread(void *arg)
{
int r;
//printf("resolver_thread %d start\n",syscall(SYS_gettid));
for(;;)
{
if (resolver.bStop) break;
r = sem_wait(resolver.sem);
if (resolver.bStop) break;
if (r)
{
if (errno!=EINTR)
{
perror("sem_wait (resolver_thread)");
break; // fatal err
}
}
else
{
struct resolve_item *ri;
ssize_t wr;
rlist_lock;
ri = TAILQ_FIRST(&resolver.resolve_list);
if (ri) TAILQ_REMOVE(&resolver.resolve_list, ri, next);
rlist_unlock;
if (ri)
{
struct addrinfo *ai,hints;
char sport[6];
//printf("THREAD %d GOT JOB %s\n", syscall(SYS_gettid), ri->dom);
snprintf(sport,sizeof(sport),"%u",ri->port);
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_socktype = SOCK_STREAM;
ri->ga_res = getaddrinfo(ri->dom,sport,&hints,&ai);
if (!ri->ga_res)
{
memcpy(&ri->ss, ai->ai_addr, ai->ai_addrlen);
freeaddrinfo(ai);
}
//printf("THREAD %d END JOB %s FIRST=%p\n", syscall(SYS_gettid), ri->dom, TAILQ_FIRST(&resolver.resolve_list));
wr = write(resolver.fd_signal_pipe,&ri,sizeof(void*));
if (wr<0)
{
free(ri);
perror("write resolve_pipe");
}
else if (wr!=sizeof(void*))
{
// partial pointer write is FATAL. in any case it will cause pointer corruption and coredump
free(ri);
fprintf(stderr,"write resolve_pipe : not full write\n");
exit(1000);
}
}
}
}
//printf("resolver_thread %d exit\n",syscall(SYS_gettid));
return NULL;
}
// empty handler: installed for SIG_BREAK (SIGUSR1) so that pthread_kill() interrupts
// sem_wait() with EINTR during shutdown instead of terminating the process
static void sigbreak(int sig)
{
}
bool resolver_init(int threads, int fd_signal_pipe)
{
int t;
struct sigaction action;
if (threads<1 || resolver.bInit) return false;
memset(&resolver,0,sizeof(resolver));
resolver.bInit = true;
#ifdef __APPLE__
// MacOS does not support unnamed semaphores
char sn[64];
snprintf(sn,sizeof(sn),"%s_%d",sem_name,getpid());
resolver.sem = sem_open(sn,O_CREAT,0600,0);
if (resolver.sem==SEM_FAILED)
{
perror("sem_open");
return false;
}
// unlink immediately to remove tails
sem_unlink(sn);
#else
if (sem_init(&resolver._sem,0,0)==-1)
{
perror("sem_init");
return false;
}
resolver.sem = &resolver._sem;
#endif
if (pthread_mutex_init(&resolver.resolve_list_lock, NULL)) return false;
resolver.bStop = false;
resolver.fd_signal_pipe = fd_signal_pipe;
TAILQ_INIT(&resolver.resolve_list);
// start as many threads as we can up to specified number
resolver.thread = malloc(sizeof(pthread_t)*threads);
if (!resolver.thread) goto ex1;
memset(&action,0,sizeof(action));
action.sa_handler = sigbreak;
sigaction(SIG_BREAK, &action, NULL);
for(t=0, resolver.threads=threads ; t<threads ; t++)
{
if (pthread_create(resolver.thread + t, NULL, resolver_thread, NULL))
{
resolver.threads=t;
break;
}
}
if (!resolver.threads)
{
// could not start any threads
free(resolver.thread);
goto ex1;
}
return true;
ex1:
pthread_mutex_destroy(&resolver.resolve_list_lock);
return false;
}
void resolver_deinit(void)
{
if (resolver.bInit)
{
resolver.bStop = true;
// wait all threads to terminate
for (int t = 0; t < resolver.threads; t++)
pthread_kill(resolver.thread[t], SIGUSR1);
for (int t = 0; t < resolver.threads; t++)
{
pthread_kill(resolver.thread[t], SIGUSR1);
pthread_join(resolver.thread[t], NULL);
}
pthread_mutex_destroy(&resolver.resolve_list_lock);
free(resolver.thread);
#ifdef __APPLE__
sem_close(resolver.sem);
#else
sem_destroy(resolver.sem);
#endif
memset(&resolver,0,sizeof(resolver));
}
}
struct resolve_item *resolver_queue(const char *dom, uint16_t port, void *ptr)
{
struct resolve_item *ri = calloc(1,sizeof(struct resolve_item));
if (!ri) return NULL;
strncpy(ri->dom,dom,sizeof(ri->dom));
ri->dom[sizeof(ri->dom)-1] = 0;
ri->port = port;
ri->ptr = ptr;
rlist_lock;
TAILQ_INSERT_TAIL(&resolver.resolve_list, ri, next);
rlist_unlock;
if (sem_post(resolver.sem)<0)
{
perror("resolver_queue sem_post");
free(ri);
return NULL;
}
return ri;
}

tpws/resolver.h (new file, 22 lines)
View File

@ -0,0 +1,22 @@
#pragma once
#include <inttypes.h>
#include <stdbool.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <netdb.h>
struct resolve_item
{
char dom[256]; // request dom
struct sockaddr_storage ss; // resolve result
int ga_res; // getaddrinfo result code
uint16_t port; // request port
void *ptr;
TAILQ_ENTRY(resolve_item) next;
};
struct resolve_item *resolver_queue(const char *dom, uint16_t port, void *ptr);
void resolver_deinit(void);
bool resolver_init(int threads, int fd_signal_pipe);
int resolver_thread_count(void);
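/*
 Usage sketch (not part of this commit): how the event-loop side is expected to drive this API.
 A pipe carries completed resolve_item pointers back from the worker threads.
 start_resolver(), queue_job() and on_resolve_pipe_readable() are illustrative names only;
 the real logic lives in event_loop() and handle_resolve_pipe() in tpws.c.
*/
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "resolver.h"

static int resolve_pipe[2];

static bool start_resolver(int threads)
{
	if (pipe(resolve_pipe) == -1) return false;
	fcntl(resolve_pipe[0], F_SETFL, O_NONBLOCK);     // read end is polled by the event loop
	return resolver_init(threads, resolve_pipe[1]);  // workers write completions to the write end
}

static void queue_job(void *conn_ptr)
{
	// "example.com" stands in for the hostname taken from the socks5 request
	if (!resolver_queue("example.com", 443, conn_ptr))
		fprintf(stderr, "could not queue resolve item\n");
}

static void on_resolve_pipe_readable(void)
{
	struct resolve_item *ri;
	// each completed job arrives as a raw pointer written by a worker thread
	while (read(resolve_pipe[0], &ri, sizeof(void*)) == (ssize_t)sizeof(void*))
	{
		if (ri->ga_res)
			fprintf(stderr, "resolve of %s failed : %d\n", ri->dom, ri->ga_res);
		else
			printf("%s resolved, ready to connect\n", ri->dom);  // tpws calls proxy_mode_connect_remote() here
		free(ri);  // queue entries are heap-allocated by resolver_queue()
	}
}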

View File

@ -32,24 +32,9 @@ SYS_execveat,
#ifdef SYS_exec_with_loader
SYS_exec_with_loader,
#endif
#ifdef SYS_clone
SYS_clone,
#endif
#ifdef SYS_clone2
SYS_clone2,
#endif
#ifdef SYS_clone3
SYS_clone3,
#endif
#ifdef SYS_osf_execve
SYS_osf_execve,
#endif
#ifdef SYS_fork
SYS_fork,
#endif
#ifdef SYS_vfork
SYS_vfork,
#endif
#ifdef SYS_uselib
SYS_uselib,
#endif

View File

@ -280,9 +280,10 @@ void tamper_out(t_ctrack *ctrack, uint8_t *segment,size_t segment_buffer_size,si
else if (params.split_any_protocol && params.split_pos < *size)
*split_pos = params.split_pos;
if (bHaveHost && bBypass && !bHostExcluded && !ctrack->hostname && *params.hostlist_auto_filename)
if (bHaveHost && bBypass && !bHostExcluded && *params.hostlist_auto_filename)
{
DBGPRINT("tamper_out put hostname : %s", Host)
if (ctrack->hostname) free(ctrack->hostname);
ctrack->hostname=strdup(Host);
}
if (params.disorder) *split_flags |= SPLIT_FLAG_DISORDER;
@ -332,7 +333,7 @@ static void auto_hostlist_failed(const char *hostname)
}
else
{
VPRINT("auto hostlist: NOT adding %s", hostname);
VPRINT("auto hostlist : NOT adding %s", hostname);
HOSTLIST_DEBUGLOG_APPEND("%s : NOT adding, duplicate detected", hostname);
}
}
@ -344,6 +345,8 @@ void tamper_in(t_ctrack *ctrack, uint8_t *segment,size_t segment_buffer_size,siz
DBGPRINT("tamper_in hostname=%s", ctrack->hostname)
if (!*params.hostlist_auto_filename) return;
HostFailPoolPurgeRateLimited(&params.hostlist_auto_fail_counters);
if (ctrack->l7proto==HTTP && ctrack->hostname)
@ -376,6 +379,8 @@ void rst_in(t_ctrack *ctrack)
{
DBGPRINT("rst_in hostname=%s", ctrack->hostname)
if (!*params.hostlist_auto_filename) return;
HostFailPoolPurgeRateLimited(&params.hostlist_auto_fail_counters);
if (!ctrack->bTamperInCutoff && ctrack->hostname)
@ -389,6 +394,8 @@ void hup_out(t_ctrack *ctrack)
{
DBGPRINT("hup_out hostname=%s", ctrack->hostname)
if (!*params.hostlist_auto_filename) return;
HostFailPoolPurgeRateLimited(&params.hostlist_auto_fail_counters);
if (!ctrack->bTamperInCutoff && ctrack->hostname)

View File

@ -138,7 +138,8 @@ static void exithelp(void)
" * multiple binds are supported. each bind-addr, bind-iface* start new bind\n"
" --port=<port>\t\t\t\t; only one port number for all binds is supported\n"
" --socks\t\t\t\t; implement socks4/5 proxy instead of transparent proxy\n"
" --no-resolve\t\t\t\t; disable socks5 remote dns ability (resolves are not async, they block all activity)\n"
" --no-resolve\t\t\t\t; disable socks5 remote dns ability\n"
" --resolver-threads=<int>\t\t; number of resolver worker threads\n"
" --local-rcvbuf=<bytes>\n"
" --local-sndbuf=<bytes>\n"
" --remote-rcvbuf=<bytes>\n"
@ -323,14 +324,15 @@ void parse_params(int argc, char *argv[])
{ "remote-sndbuf",required_argument,0,0 },// optidx=46
{ "socks",no_argument,0,0 },// optidx=47
{ "no-resolve",no_argument,0,0 },// optidx=48
{ "skip-nodelay",no_argument,0,0 },// optidx=49
{ "tamper-start",required_argument,0,0 },// optidx=50
{ "tamper-cutoff",required_argument,0,0 },// optidx=51
{ "resolver-threads",required_argument,0,0 },// optidx=49
{ "skip-nodelay",no_argument,0,0 },// optidx=50
{ "tamper-start",required_argument,0,0 },// optidx=51
{ "tamper-cutoff",required_argument,0,0 },// optidx=52
#if defined(BSD) && !defined(__OpenBSD__) && !defined(__APPLE__)
{ "enable-pf",no_argument,0,0 },// optidx=52
{ "enable-pf",no_argument,0,0 },// optidx=53
#elif defined(__linux__)
{ "mss",required_argument,0,0 },// optidx=52
{ "mss-pf",required_argument,0,0 },// optidx=53
{ "mss",required_argument,0,0 },// optidx=53
{ "mss-pf",required_argument,0,0 },// optidx=54
#endif
{ "hostlist-auto-retrans-threshold",optional_argument,0,0}, // ignored. for nfqws command line compatibility
{ NULL,0,NULL,0 }
@ -713,10 +715,18 @@ void parse_params(int argc, char *argv[])
case 48: /* no-resolve */
params.no_resolve = true;
break;
case 49: /* skip-nodelay */
case 49: /* resolver-threads */
params.resolver_threads = atoi(optarg);
if (params.resolver_threads<1 || params.resolver_threads>300)
{
fprintf(stderr, "resolver-threads must be within 1..300\n");
exit_clean(1);
}
break;
case 50: /* skip-nodelay */
params.skip_nodelay = true;
break;
case 50: /* tamper-start */
case 51: /* tamper-start */
{
const char *p=optarg;
if (*p=='n')
@ -729,7 +739,7 @@ void parse_params(int argc, char *argv[])
params.tamper_start = atoi(p);
}
break;
case 51: /* tamper-cutoff */
case 52: /* tamper-cutoff */
{
const char *p=optarg;
if (*p=='n')
@ -743,11 +753,11 @@ void parse_params(int argc, char *argv[])
}
break;
#if defined(BSD) && !defined(__OpenBSD__) && !defined(__APPLE__)
case 52: /* enable-pf */
case 53: /* enable-pf */
params.pf_enable = true;
break;
#elif defined(__linux__)
case 52: /* mss */
case 53: /* mss */
// this option does not work in any BSD and MacOS. OS may accept but it changes nothing
params.mss = atoi(optarg);
if (params.mss<88 || params.mss>32767)
@ -756,7 +766,7 @@ void parse_params(int argc, char *argv[])
exit_clean(1);
}
break;
case 53: /* mss-pf */
case 54: /* mss-pf */
if (!pf_parse(optarg,&params.mss_pf))
{
fprintf(stderr, "Invalid MSS port filter.\n");
@ -780,6 +790,7 @@ void parse_params(int argc, char *argv[])
fprintf(stderr, "Cannot split with --skip-nodelay\n");
exit_clean(1);
}
if (!params.resolver_threads) params.resolver_threads = 5 + params.maxconn/50;
if (*params.hostlist_auto_filename) params.hostlist_auto_mod_time = file_mod_time(params.hostlist_auto_filename);
if (!LoadIncludeHostLists())

View File

@ -22,6 +22,7 @@
#include "tamper.h"
#include "socks.h"
#include "helpers.h"
#include "hostlist.h"
// keep separate legs counter. counting every time thousands of legs can consume cpu
@ -333,7 +334,7 @@ static bool proxy_remote_conn_ack(tproxy_conn_t *conn, int sock_err)
//Createas a socket and initiates the connection to the host specified by
//remote_addr.
//Returns -1 if something fails, >0 on success (socket fd).
static int connect_remote(const struct sockaddr *remote_addr)
static int connect_remote(const struct sockaddr *remote_addr, bool bApplyConnectionFooling)
{
int remote_fd = 0, yes = 1, no = 0, v;
@ -351,7 +352,7 @@ static int connect_remote(const struct sockaddr *remote_addr)
close(remote_fd);
return -1;
}
if(setsockopt(remote_fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0)
if (setsockopt(remote_fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0)
{
perror("setsockopt (SO_REUSEADDR, connect_remote)");
close(remote_fd);
@ -359,7 +360,7 @@ static int connect_remote(const struct sockaddr *remote_addr)
}
if (!set_socket_buffers(remote_fd, params.remote_rcvbuf, params.remote_sndbuf))
return -1;
if(!set_keepalive(remote_fd))
if (!set_keepalive(remote_fd))
{
perror("set_keepalive");
close(remote_fd);
@ -371,7 +372,7 @@ static int connect_remote(const struct sockaddr *remote_addr)
close(remote_fd);
return -1;
}
if (params.mss)
if (bApplyConnectionFooling && params.mss)
{
uint16_t port = saport(remote_addr);
if (pf_in_range(port,&params.mss_pf))
@ -389,7 +390,7 @@ static int connect_remote(const struct sockaddr *remote_addr)
VPRINT("Not setting MSS. Port %u is out of MSS port range.",port)
}
}
if(connect(remote_fd, remote_addr, remote_addr->sa_family == AF_INET ? sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6)) < 0)
if (connect(remote_fd, remote_addr, remote_addr->sa_family == AF_INET ? sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6)) < 0)
{
if(errno != EINPROGRESS)
{
@ -417,6 +418,7 @@ static void free_conn(tproxy_conn_t *conn)
conn_free_buffers(conn);
if (conn->partner) conn->partner->partner=NULL;
if (conn->track.hostname) free(conn->track.hostname);
if (conn->socks_ri) conn->socks_ri->ptr = NULL; // detach conn
free(conn);
}
static tproxy_conn_t *new_conn(int fd, bool remote)
@ -536,7 +538,7 @@ static tproxy_conn_t* add_tcp_connection(int efd, struct tailhead *conn_list,int
if (proxy_type==CONN_TYPE_TRANSPARENT)
{
if ((remote_fd = connect_remote((struct sockaddr *)&orig_dst)) < 0)
if ((remote_fd = connect_remote((struct sockaddr *)&orig_dst, true)) < 0)
{
fprintf(stderr, "Failed to connect\n");
close(local_fd);
@ -700,7 +702,14 @@ bool proxy_mode_connect_remote(const struct sockaddr *sa, tproxy_conn_t *conn, s
return false;
}
if ((remote_fd = connect_remote(sa)) < 0)
bool bConnFooling=true;
if (conn->track.hostname && params.mss)
{
VPRINT("0-phase desync hostlist check")
bConnFooling=HostlistCheck(conn->track.hostname, NULL);
}
if ((remote_fd = connect_remote(sa, bConnFooling)) < 0)
{
fprintf(stderr, "socks failed to connect (1) errno=%d\n", errno);
socks_send_rep_errno(conn->socks_ver, conn->fd, errno);
@ -726,7 +735,7 @@ bool proxy_mode_connect_remote(const struct sockaddr *sa, tproxy_conn_t *conn, s
TAILQ_INSERT_HEAD(conn_list, conn->partner, conn_ptrs);
legs_remote++;
print_legs();
DBGPRINT("socks connecting")
DBGPRINT("S_WAIT_CONNECTION")
conn->socks_state = S_WAIT_CONNECTION;
return true;
}
@ -865,10 +874,8 @@ static bool handle_proxy_mode(tproxy_conn_t *conn, struct tailhead *conn_list)
((struct sockaddr_in6*)&ss)->sin6_scope_id = 0;
break;
case S5_ATYP_DOM:
// NOTE : resolving is blocking. do you want it really ?
{
struct addrinfo *ai,hints;
char sdom[256];
int r;
uint16_t port;
char sport[6];
@ -886,26 +893,18 @@ static bool handle_proxy_mode(tproxy_conn_t *conn, struct tailhead *conn_list)
socks5_send_rep(conn->fd,S5_REP_HOST_UNREACHABLE);
return false;
}
snprintf(sport,sizeof(sport),"%u",port);
memcpy(sdom,m->dd.domport,m->dd.len);
sdom[m->dd.len] = '\0';
DBGPRINT("socks5 resolving hostname '%s' port '%s'",sdom,sport)
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_socktype = SOCK_STREAM;
r=getaddrinfo(sdom,sport,&hints,&ai);
if (r)
m->dd.domport[m->dd.len] = 0;
DBGPRINT("socks5 queue resolve hostname '%s' port '%u'",m->dd.domport,port)
conn->socks_ri = resolver_queue(m->dd.domport,port,conn);
if (!conn->socks_ri)
{
VPRINT("socks5 getaddrinfo error %d",r)
socks5_send_rep(conn->fd,S5_REP_HOST_UNREACHABLE);
VPRINT("socks5 could not queue resolve item")
socks5_send_rep(conn->fd,S5_REP_GENERAL_FAILURE);
return false;
}
if (params.debug>=2)
{
printf("socks5 hostname resolved to :\n");
print_addrinfo(ai);
}
memcpy(&ss,ai->ai_addr,ai->ai_addrlen);
freeaddrinfo(ai);
conn->socks_state=S_WAIT_RESOLVE;
DBGPRINT("S_WAIT_RESOLVE")
return true;
}
break;
default:
@ -927,6 +926,40 @@ static bool handle_proxy_mode(tproxy_conn_t *conn, struct tailhead *conn_list)
return false;
}
static bool resolve_complete(struct resolve_item *ri, struct tailhead *conn_list)
{
tproxy_conn_t *conn = (tproxy_conn_t *)ri->ptr;
if (conn && (conn->state != CONN_CLOSED))
{
if (conn->socks_state==S_WAIT_RESOLVE)
{
DBGPRINT("resolve_complete %s. getaddrinfo result %d", ri->dom, ri->ga_res);
if (ri->ga_res)
{
socks5_send_rep(conn->fd,S5_REP_HOST_UNREACHABLE);
return false;
}
else
{
if (!conn->track.hostname)
{
DBGPRINT("resolve_complete put hostname : %s", ri->dom)
conn->track.hostname = strdup(ri->dom);
}
return proxy_mode_connect_remote((struct sockaddr *)&ri->ss,conn,conn_list);
}
}
else
fprintf(stderr, "resolve_complete: conn in wrong socks_state !!! (%s)\n", ri->dom);
}
else
DBGPRINT("resolve_complete: orphaned resolve for %s", ri->dom);
return true;
}
static bool in_tamper_out_range(tproxy_conn_t *conn)
{
return (params.tamper_start_n ? (conn->tnrd+1) : conn->trd) >= params.tamper_start &&
@ -1226,6 +1259,31 @@ static void conn_close_with_partner_check(struct tailhead *conn_list, struct tai
}
}
static bool handle_resolve_pipe(tproxy_conn_t **conn, struct tailhead *conn_list, int fd)
{
ssize_t rd;
struct resolve_item *ri;
bool b;
rd = read(fd,&ri,sizeof(void*));
if (rd<0)
{
perror("resolve_pipe read");
return false;
}
else if (rd!=sizeof(void*))
{
// partial pointer read is FATAL. in any case it will cause pointer corruption and coredump
fprintf(stderr,"resolve_pipe not full read %zu\n",rd);
exit(1000);
}
b = resolve_complete(ri, conn_list);
*conn = (tproxy_conn_t *)ri->ptr;
if (*conn) (*conn)->socks_ri = NULL;
free(ri);
return b;
}
int event_loop(const int *listen_fd, size_t listen_fd_ct)
{
int retval = 0, num_events = 0;
@ -1239,9 +1297,12 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
size_t sct;
struct sockaddr_storage accept_sa;
socklen_t accept_salen;
int resolve_pipe[2];
if (!listen_fd_ct) return -1;
resolve_pipe[0]=resolve_pipe[1]=0;
legs_local = legs_remote = 0;
//Initialize queue (remember that TAILQ_HEAD just defines the struct)
TAILQ_INIT(&conn_list);
@ -1272,6 +1333,34 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
goto ex;
}
}
if ((params.proxy_type==CONN_TYPE_SOCKS) && !params.no_resolve)
{
if (pipe(resolve_pipe)==-1)
{
perror("pipe (resolve_pipe)");
retval = -1;
goto ex;
}
if (fcntl(resolve_pipe[0], F_SETFL, O_NONBLOCK) < 0)
{
perror("resolve_pipe set O_NONBLOCK");
retval = -1;
goto ex;
}
ev.data.ptr = NULL;
if (epoll_ctl(efd, EPOLL_CTL_ADD, resolve_pipe[0], &ev) == -1) {
perror("epoll_ctl (listen socket)");
retval = -1;
goto ex;
}
if (!resolver_init(params.resolver_threads,resolve_pipe[1]))
{
fprintf(stderr,"could not initialize multithreaded resolver\n");
retval = -1;
goto ex;
}
VPRINT("initialized multi threaded resolver with %d threads",resolver_thread_count());
}
for(;;)
{
@ -1290,8 +1379,21 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
for (i = 0; i < num_events; i++)
{
conn = (tproxy_conn_t*)events[i].data.ptr;
if (!conn)
{
DBGPRINT("\nEVENT mask %08X resolve_pipe",events[i].events)
if (events[i].events & EPOLLIN)
{
DBGPRINT("EPOLLIN")
if (!handle_resolve_pipe(&conn, &conn_list, resolve_pipe[0]))
{
DBGPRINT("handle_resolve_pipe false")
if (conn) close_tcp_conn(&conn_list,&close_list,conn);
}
}
continue;
}
conn->event_count++;
if (conn->listener)
{
DBGPRINT("\nEVENT mask %08X fd=%d accept",events[i].events,conn->fd)
@ -1352,7 +1454,7 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
VPRINT("Socket fd=%d (partner_fd=%d, remote=%d) %s so_error=%d (%s)",conn->fd,conn->partner ? conn->partner->fd : 0,conn->remote,se,errn,strerror(errn));
proxy_remote_conn_ack(conn,errn);
read_all_and_buffer(conn,3);
if (errn==ECONNRESET && conn->remote && conn_partner_alive(conn)) rst_in(&conn->partner->track);
if (errn==ECONNRESET && conn->remote && params.tamper && conn_partner_alive(conn)) rst_in(&conn->partner->track);
conn_close_with_partner_check(&conn_list,&close_list,conn);
continue;
@ -1370,7 +1472,7 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
{
DBGPRINT("EPOLLRDHUP")
read_all_and_buffer(conn,2);
if (!conn->remote) hup_out(&conn->track);
if (!conn->remote && params.tamper) hup_out(&conn->track);
if (conn_has_unsent(conn))
{
@ -1419,6 +1521,9 @@ int event_loop(const int *listen_fd, size_t listen_fd_ct)
ex:
if (efd) close(efd);
if (listen_conn) free(listen_conn);
resolver_deinit();
if (resolve_pipe[0]) close(resolve_pipe[0]);
if (resolve_pipe[1]) close(resolve_pipe[1]);
return retval;
}

View File

@ -6,6 +6,7 @@
#include <time.h>
#include "tamper.h"
#include "params.h"
#include "resolver.h"
#define BACKLOG 10
#define MAX_EPOLL_EVENTS 64
@ -59,10 +60,12 @@ struct tproxy_conn
enum {
S_WAIT_HANDSHAKE=0,	// waiting for the initial socks handshake
S_WAIT_REQUEST,		// waiting for the socks connect request
S_WAIT_RESOLVE,		// domain name queued to the resolver pool, connection paused until resolve_complete()
S_WAIT_CONNECTION,	// waiting for connect() to the target address to complete
S_TCP			// established, proxying data
} socks_state;
uint8_t socks_ver;
struct resolve_item *socks_ri;
// these values are used in flow control. we do not use ET (edge triggered) polling
// if we don't disable notifications they will come endlessly until the condition becomes false and will eat all cpu time