Victor Laskin’s Blog (https://vitiy.info): programming, architecture and design (C++, Qt, .NET/WPF, Android, iOS, NoSQL, distributed systems, mobile development, image processing, etc.)

Writing custom protocol for nanomsg https://vitiy.info/writing-custom-protocol-for-nanomsg/ Tue, 09 Dec 2014

Custom protocol for nanomsg

Nanomsg is the next iteration of the ZeroMQ library, providing smart cross-platform sockets for implementing distributed architectures. Here you can find basic examples of the included protocols (communication patterns). The lib is simple (written in pure C) and has no dependencies such as boost. And since this is at least the third iteration from the same author, you can expect some quality and performance here.

This is the kind of solution that saves you from the hell of writing your own serious socket server. If you have ever had that experience, you know the range of problems that are not obvious at the start. Here we can skip all of them and go straight to processing messages: the lib handles automatic reconnection after link failures, non-blocking sending and receiving, sockets that can serve a large set of clients, etc. All this looks like a perfect solution for the server-side inner transport of fast distributed architectures.

But I also want to try it on the outside. The basic communication patterns (PAIR, BUS, REQREP, PUBSUB, PIPELINE, SURVEY) may fit a large set of inner server transport schemes, but the current implementation has some minor limits for client-side applications. I mean limits of the protocols, not of the lib itself.

Unfortunately, the current version of the PUBSUB protocol does its filtering on the subscriber side. So 'subscribing' clients receive the whole message flow, and that is unbearable for me.
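To make the point concrete: subscription matching boils down to a prefix comparison, and the stock SUB socket performs it only after the message has already crossed the wire. A simplified sketch of such a check (not nanomsg's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Subscriber-side filtering, as in a stock SUB socket: the message has
   already been delivered over the wire, and is dropped here only if no
   local subscription prefix matches it. */
bool sub_matches(const char *subscription, const char *msg, size_t msglen)
{
    size_t sublen = strlen(subscription);
    return msglen >= sublen && memcmp(subscription, msg, sublen) == 0;
}
```

Every non-matching message still costs bandwidth and is visible to the client before being dropped, which is exactly the problem for a public-facing service.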

The BUS protocol requires a fully linked scheme:

[Diagram: nanomsg BUS protocol]

I expect a BUS-like protocol to work under sparser conditions.

As nanomsg is an open-source lib under a nice licence (MIT/X11), my first thought was to extend one of the existing protocols to meet my needs.

Why a new protocol?

I wanted to try these smart sockets with external clients, and to match today's reality I assume each client has a set of devices connected simultaneously to some network service.

At first I aimed to create some complex routing protocol, but then came up with a simpler approach: a custom protocol that is a fusion of the BUS and PUB/SUB protocols (I refer to it here as SUBBUS).

Scheme:

[Diagram: SUBBUS protocol scheme]

Black lines are initial connections; coloured lines are messages. This scheme contains two clients, Bob and John. John has 2 devices, and Bob is a geek, so he has 4 devices simultaneously connected to the server node. Each message from a client device goes to the other devices of the same client. You can look at this scheme as two BUS protocols separated by subscription.

This gives the ability to perform instant cloud synchronisation, simultaneous operation from multiple devices and other fun stuff.

Possible inner structure (there can be other ways):

  • Each node has a list of subscriptions (a socket option holding a list of strings), e.g. /users/john/ or /chats/chat15/.
  • Subscription filtering is done on the sending side. (This is important when you have a large number of clients: each of them must not receive the whole flow, not only to save bandwidth but also for security reasons.) So the client has to send its subscription list to the server somehow (subscription forwarding). After a reconnect this information has to be resent, and until the subscriptions have been sent the client should receive nothing.
  • Each message should contain a routing prefix (header), e.g. /users/john/ or /chats/chat15/.
  • Each node should keep a tree of connected clients' subscriptions with pipe lists as leaves. The sending operation uses this tree to send only to the subscribed range of clients.
  • Each message from a client node should be transmitted to the other nodes within the same subscription (forwarding). This happens before server-side processing and is meant to speed up message propagation between devices. Some optional filters can be added here.
  • [Optional] SSL-like encryption for each pipe.
  • All this stuff should be as simple as possible.
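The routing-prefix idea from the list above can be sketched in a few lines of C. subbus_frame is an invented name for illustration; the '=' separator matches the message format visible in the test output later in the post:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical framing helper: prepend the routing prefix to the payload,
   separated by '='.  The receiver splits on the first '=' to recover the
   routing key. */
char *subbus_frame(const char *prefix, const char *payload)
{
    size_t n = strlen(prefix) + 1 + strlen(payload) + 1;
    char *msg = malloc(n);
    if (msg != NULL)
        snprintf(msg, n, "%s=%s", prefix, payload);
    return msg;   /* caller frees */
}
```

For example, subbus_frame("/users/john/", "hello") yields "/users/john/=hello".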

It's not too complicated to start writing your own protocol for nanomsg. The only catch is that the lib is written in pure C, so you must be somewhat ready for that. Go to the src/protocols folder: it contains the sources of all the protocols you can explore. Mostly they simply implement the list of methods described inside src/protocol.h:

/*  To be implemented by individual socket types. */
struct nn_sockbase_vfptr {

    /*  Ask socket to stop. */
    void (*stop) (struct nn_sockbase *self);

    /*  Deallocate the socket. */
    void (*destroy) (struct nn_sockbase *self);

    /*  Management of pipes. 'add' registers a new pipe. The pipe cannot be used
        to send to or to be received from at the moment. 'rm' unregisters the
        pipe. The pipe should not be used after this call as it may already be
        deallocated. 'in' informs the socket that pipe is readable. 'out'
        informs it that it is writable. */
    int (*add) (struct nn_sockbase *self, struct nn_pipe *pipe);
    void (*rm) (struct nn_sockbase *self, struct nn_pipe *pipe);
    void (*in) (struct nn_sockbase *self, struct nn_pipe *pipe);
    void (*out) (struct nn_sockbase *self, struct nn_pipe *pipe);

    /*  Return any combination of event flags defined above, thus specifying
        whether the socket should be readable, writable, both or none. */
    int (*events) (struct nn_sockbase *self);

    /*  Send a message to the socket. Returns -EAGAIN if it cannot be done at
        the moment or zero in case of success. */
    int (*send) (struct nn_sockbase *self, struct nn_msg *msg);

    /*  Receive a message from the socket. Returns -EAGAIN if it cannot be done
        at the moment or zero in case of success. */
    int (*recv) (struct nn_sockbase *self, struct nn_msg *msg);

    /*  Set a protocol specific option. */
    int (*setopt) (struct nn_sockbase *self, int level, int option,
        const void *optval, size_t optvallen);

    /*  Retrieve a protocol specific option. */
    int (*getopt) (struct nn_sockbase *self, int level, int option,
        void *optval, size_t *optvallen);
};
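The nn_sockbase_vfptr struct above is the usual plain-C substitute for virtual methods: a table of function pointers that each protocol fills with its own implementations. A minimal self-contained analogue of the pattern (illustrative, not nanomsg code; all names are invented):

```c
#include <assert.h>
#include <stddef.h>

/* A "base" struct holds a pointer to a table of operations, and every
   "protocol" supplies its own table: the plain-C vtable pattern. */
struct sock_ops {
    int (*send)(void *self, const char *msg);
    int (*recv)(void *self, char *buf, size_t len);
};

struct sockbase {
    const struct sock_ops *ops;
};

static int demo_send(void *self, const char *msg)
{
    (void)self; (void)msg;
    return 0;                       /* success */
}

static int demo_recv(void *self, char *buf, size_t len)
{
    (void)self; (void)buf; (void)len;
    return -1;                      /* nothing to read, -EAGAIN style */
}

static const struct sock_ops demo_ops = { demo_send, demo_recv };

/* Generic entry point that dispatches through the table. */
int sock_send(struct sockbase *s, const char *msg)
{
    return s->ops->send(s, msg);
}
```

Implementing a new protocol amounts to filling such a table with your own functions and registering it, which is what the steps below do for subbus.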

So you can just clone some protocol as the foundation of your own: I took the bus folder and cloned it to subbus, then renamed everything inside from 'bus' to 'subbus' using find/replace. In the root src folder there is a bus.h file which contains only the list of constants for protocol access; you need to clone it too under your new protocol name (subbus.h in my case). The next steps are to add the new protocol to the makefile and to the socket types list.

Add to Makefile.am:

NANOMSG_PROTOCOLS = \
    $(PROTOCOLS_BUS) \
    $(PROTOCOLS_SUBBUS) \ .....

PROTOCOLS_SUBBUS = \
    src/protocols/subbus/subbus.h \
    src/protocols/subbus/subbus.c \
    src/protocols/subbus/xsubbus.h \
    src/protocols/subbus/xsubbus.c

Add the protocol to src/core/symbol.c:

{NN_BUS, "NN_BUS", NN_NS_PROTOCOL,
        NN_TYPE_NONE, NN_UNIT_NONE},
{NN_SUBBUS, "NN_SUBBUS", NN_NS_PROTOCOL,
        NN_TYPE_NONE, NN_UNIT_NONE},

Add the protocol's socket types to the supported list inside src/core/global.c (don't forget the includes):

/*  Plug in individual socktypes. */
  
...
    nn_global_add_socktype (nn_bus_socktype);
    nn_global_add_socktype (nn_xbus_socktype);
    nn_global_add_socktype (nn_subbus_socktype);
    nn_global_add_socktype (nn_xsubbus_socktype);

After that I grabbed one of the bus protocol examples from here and changed the socket creation part:

#include <stdio.h>
#include <errno.h>
#include "../nanomsg/src/nn.h"
#include "../nanomsg/src/subbus.h"

int node (const int argc, const char **argv)
{
  int sock = nn_socket (AF_SP, NN_SUBBUS);
  if (sock < 0)
  {
    printf ("nn_socket failed with error code %d\n", nn_errno ());
    if (errno == EINVAL) printf("%s\n", "Unknown protocol");
  }

...

After that the sample should compile and work. If you failed to add your protocol copy to the socket types list, you will get an 'Unknown protocol' error.

Here is the complete Dockerfile I use to build and run a simple test. It gets the latest nanomsg from github, modifies the sources to include the new protocol, copies the protocol source from the host, and builds the lib and the protocol test.

# THIS DOCKERFILE COMPILES Custom Nanomsg protocol + sample under Ubuntu
 
FROM ubuntu

MAINTAINER Victor Laskin "victor.laskin@gmail.com"

# Install compilation tools

RUN apt-get update && apt-get install -y \
    automake \
    build-essential \
    wget \
    p7zip-full \
    bash \
    curl \
    git \
    sed \
    libtool

# Get latest Nanomsg build from github

RUN mkdir /nanomsg && cd nanomsg
WORKDIR /nanomsg

RUN git clone https://github.com/nanomsg/nanomsg.git && ls


# Modify nanomsg files to register new protocol

RUN cd nanomsg && sed -i '/include "..\/bus.h"/a #include "..\/subbus.h"' src/core/symbol.c && \
	sed -i '/"NN_BUS", NN_NS_PROTOCOL,/a NN_TYPE_NONE, NN_UNIT_NONE}, \n\
    {NN_SUBBUS, "NN_SUBBUS", NN_NS_PROTOCOL,' src/core/symbol.c && \
	cat src/core/symbol.c && \
	sed -i '/#include "..\/protocols\/bus\/xbus.h"/a #include "..\/protocols\/subbus\/subbus.h" \n\#include "..\/protocols\/subbus\/xsubbus.h"' src/core/global.c && \
	sed -i '/nn_global_add_socktype (nn_xbus_socktype);/a nn_global_add_socktype (nn_subbus_socktype); \n\
    nn_global_add_socktype (nn_xsubbus_socktype);' src/core/global.c && \
	cat src/core/global.c | grep nn_global_add_socktype

# Modify Makefile.am 

RUN cd nanomsg && sed -i '/xbus.c/a \\n\
PROTOCOLS_SUBBUS = \\\n\
    src/protocols/subbus/subbus.h \\\n\
    src/protocols/subbus/subbus.c \\\n\
    src/protocols/subbus/xsubbus.h \\\n\
    src/protocols/subbus/xsubbus.c \n\
    \\
    ' Makefile.am && \
    sed -i '/$(PROTOCOLS_BUS)/a $(PROTOCOLS_SUBBUS) \\\
    ' Makefile.am && cat Makefile.am 


# This is a temporary fix - DISABLE STATS
RUN sed -i '/nn_global_submit_statistics ();/i if (0)' nanomsg/src/core/global.c

# Get custom protocol source (copy from host)

RUN mkdir nanomsg/src/protocols/subbus
COPY subbus.h /nanomsg/nanomsg/src/
COPY subbus/*.c /nanomsg/nanomsg/src/protocols/subbus/
COPY subbus/*.h /nanomsg/nanomsg/src/protocols/subbus/

# Build nanomsg lib

RUN cd nanomsg && ./autogen.sh && ./configure && make && ls .libs

# Get and build custom protocol test

RUN mkdir test
COPY testsubbus.c /nanomsg/test/
COPY test.sh /nanomsg/test/
RUN cd test && ls && gcc -pthread testsubbus.c ../nanomsg/.libs/libnanomsg.a -o testbus -lanl && ls

# Set port and entry point

EXPOSE 1234 1235 1236 1237 1238 1239 1240
ENTRYPOINT cd /nanomsg/test/ && ./test.sh

Note: the lib is still beta (0.5-beta, released on November 14th, 2014), so you can expect some unpolished spots. Inside the script you can find the line which disables statistics, as it has a blocking bug at the moment; I expect it to be fixed very soon, as the fix has already been pulled.

Docker is an optional way to build this, of course, and you can convert this Dockerfile into a simple build script. Don't forget to change the name of your protocol.

Modifications I made

I will not paste cuts of the source code here, as that would make the post too messy; this is plain old C, so even simple things tend to get a bit long. Instead I will note the main steps of my implementation. Keep in mind that this is only my approach and everything can be done another way.

I modified nn_xsubbus_setopt to set subscriptions (I use a linked list to store the local subscription list).

I use two trees to speed up communication routing. The first tree contains descriptions of client subscriptions keyed by pipe id (nn_pipe*). It also stores a flag saying whether this node's subscriptions have already been sent to that pipe. To keep this tree more balanced I use a hash of the pipe pointer as the binary tree key.

This tree is used in the nn_xsubbus_add, nn_xsubbus_rm and nn_xsubbus_out functions to synchronise subscription lists. nn_xsubbus_add is called when a new pipe is connected, and there we add a new leaf into the tree. nn_xsubbus_out tells us the pipe is writable, so we can send our list of subscriptions to the other side (if we have not already done so). nn_xsubbus_rm means the pipe was removed.
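Raw pipe pointers make poor binary-tree keys (allocators hand them out clustered and with aligned low bits), which is why hashing them first helps balance. The post does not say which hash is used; any decent integer mixer will do, for example:

```c
#include <assert.h>
#include <stdint.h>

/* Mix the bits of a pointer value so near-equal, aligned addresses map to
   well-spread tree keys.  This is a generic multiplicative mixer (the
   MurmurHash3 64-bit finalizer); subbus may use something else. */
uint64_t ptr_hash(const void *p)
{
    uint64_t x = (uint64_t)(uintptr_t)p;
    x ^= x >> 33;
    x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33;
    x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}
```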

The second tree is used for the main sending operation and yields the list of pipes by subscription string key. As a starting point I took a ternary tree where each node contains the actual list of connected pipes. The nn_xsubbus_send method splits the header from each message and sends the message to the corresponding part of the tree.

When a new message arrives inside nn_xsubbus_recv, its header is checked; if it starts with the special mark of a subscription list, we add that list into the second tree. If the message is 'normal', it is forwarded to the other pipes of the same subscription (message forwarding, as the BUS protocol wants).
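The header check on receive can be sketched as splitting the message at the first '=' (split_header is an invented name; the separator matches the test output shown later):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Split "<subscription>=<body>" at the first '='.  Returns a pointer to
   the body and reports the header length, or NULL if the message carries
   no routing header. */
const char *split_header(const char *msg, size_t *header_len)
{
    const char *eq = strchr(msg, '=');
    if (eq == NULL)
        return NULL;
    *header_len = (size_t)(eq - msg);
    return eq + 1;
}
```

For a message like "/USER/BOB/=node2 18" this yields the routing key "/USER/BOB/" and the body "node2 18".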

Note that the trees should behave as persistent trees in a multithreaded environment; I prefer non-locking structures here. To keep that simple, the current implementation does not clean up the chains of disconnected leaves (it just removes the pipes). Some tree rebalancing algorithm would be nice to add in the future.

As a test I slightly modified the bus test sample to set the subscription from argv[2] as a socket option and to prepend each message with the current subscription.

./testbus node0 / tcp://127.0.0.1:1234 & node0=$!
./testbus node1 /USER/JOHN/ tcp://127.0.0.1:1235 tcp://127.0.0.1:1234 & node1=$!
./testbus node2 /USER/BOB/ tcp://127.0.0.1:1236 tcp://127.0.0.1:1234 & node2=$!
./testbus node3 /USER/JOHN/ tcp://127.0.0.1:1237 tcp://127.0.0.1:1234 & node3=$!
./testbus node4 /USER/BOB/ tcp://127.0.0.1:1238 tcp://127.0.0.1:1234 & node4=$!
./testbus node5 /USER/BOB/ tcp://127.0.0.1:1239 tcp://127.0.0.1:1234 & node5=$!
./testbus node6 /USER/BOB/ tcp://127.0.0.1:1240 tcp://127.0.0.1:1234 & node6=$!

Here is part of the test output (for Bob):

node5: RECEIVED '/USER/BOB/=node2 18' 20 FROM BUS
node4: RECEIVED '/USER/BOB/=node2 18' 20 FROM BUS
node6: RECEIVED '/USER/BOB/=node2 18' 20 FROM BUS
node2: RECEIVED '/USER/BOB/=node5 18' 20 FROM BUS
node5: RECEIVED '/USER/BOB/=node6 18' 20 FROM BUS
node4: RECEIVED '/USER/BOB/=node5 18' 20 FROM BUS
node6: RECEIVED '/USER/BOB/=node5 18' 20 FROM BUS
node2: RECEIVED '/USER/BOB/=node6 18' 20 FROM BUS
node5: RECEIVED '/USER/BOB/=node4 18' 20 FROM BUS
node2: RECEIVED '/USER/BOB/=node4 18' 20 FROM BUS
node4: RECEIVED '/USER/BOB/=node6 18' 20 FROM BUS
node6: RECEIVED '/USER/BOB/=node4 18' 20 FROM BUS

As you can see, a bus is formed between node2, node4, node5 and node6.

I will post the sources here after I run some tests with a large set of clients, some stress tests and so on.

Dockerfile example: how to compile libcurl for Android inside a Docker container https://vitiy.info/dockerfile-example-to-compile-libcurl-for-android-inside-docker-container/ Tue, 28 Oct 2014

Update (5 July 2015): see the updated version of the Dockerfile at the end of the post, with the new NDK / SSL / clang toolchain.

If you have never heard of Docker, be sure to check it out as fast as possible; there are lots of publications out there. At first it looks like yet another virtualisation tool, but it is really more like a new paradigm. Some may call it a very advanced chroot, some may call it virtual containers with version control and build scripts, and so on. I like the idea of application-centric containers: your application carries a whole operating system as a coating, which gives perfect separation from outside influence. You can also easily reproduce a production environment at another location. It makes virtualisation easy and fun.

[Image: libcurl in Docker]

Almost everything can be done inside containers now. Recently I had to recompile curl for Android as a static lib using the latest NDK toolchain. It's not so complicated to do on your local machine (as long as it is not Windows), but now there is a cleaner way to do this time-wasting operation. You can go to Digital Ocean, create a droplet with Docker, and, using the Dockerfile from the end of this post, compile it while drinking coffee.

A Dockerfile is a simple script that automates container image creation. In our case the script will:

  • set up compilation tools / utils
  • download the SDK/NDK
  • create a custom cross-compilation toolchain
  • download the source code of the libs (zlib, libcurl)
  • set up environment variables for cross-compilation
  • configure and make the libs
  • gather the output in one folder and provide a way to get the compiled libs

So let's create it step by step:

# THIS DOCKERFILE TRIES TO COMPILE CURL FOR ANDROID
 
FROM ubuntu

MAINTAINER Victor Laskin "victor.laskin@gmail.com"

The FROM field describes the source OS image. You can specify a version of ubuntu or choose something else.

# Install compilation tools

RUN apt-get update && apt-get install -y \
    automake \
    build-essential \
    wget \
    p7zip-full \
    bash \
    curl

The next step installs compilation tools and some utils. Note that each command inside a Dockerfile produces an intermediate image which is used as a cache if you run your Dockerfile again. This saves a lot of time when you are tuning or recreating your container image with different options.

# Download SDK / NDK

RUN mkdir /Android && cd Android && mkdir output
WORKDIR /Android

RUN wget http://dl.google.com/android/android-sdk_r23.0.2-linux.tgz
RUN wget http://dl.google.com/android/ndk/android-ndk-r10c-linux-x86_64.bin

Here we download the SDK / NDK using official links from Google; you can edit these to point at more recent versions. We also create the /Android folder and use it as WORKDIR, so any subsequent command runs in this dir.

# Extracting ndk/sdk

RUN tar -xvzf android-sdk_r23.0.2-linux.tgz && \
	chmod a+x android-ndk-r10c-linux-x86_64.bin && \
	7z x android-ndk-r10c-linux-x86_64.bin

Extracting the NDK produces more than 4.5 GB (for now). Docker aims to make containers as light as possible, so watch your free storage space: images can consume it pretty fast if you don't clean them up.

# Set ENV variables

ENV ANDROID_HOME /Android/android-sdk-linux
ENV NDK_ROOT /Android/android-ndk-r10c
ENV PATH $PATH:$ANDROID_HOME/tools
ENV PATH $PATH:$ANDROID_HOME/platform-tools

The ENV command sets an environment variable. Note that you can't use the export command the way you do in sh scripts to set local compilation variables, because every command in a Dockerfile produces a new image. Here we just set the SDK/NDK folders.

# Make stand alone toolchain (Modify platform / arch here)

RUN mkdir toolchain-arm && bash $NDK_ROOT/build/tools/make-standalone-toolchain.sh --verbose --platform=android-19 --install-dir=toolchain-arm --arch=arm --toolchain=arm-linux-androideabi-4.9 --system=linux-x86_64

ENV TOOLCHAIN /Android/toolchain-arm
ENV SYSROOT $TOOLCHAIN/sysroot
ENV PATH $PATH:$TOOLCHAIN/bin:$SYSROOT/usr/local/bin

Next we create a standalone compilation toolchain using a script from the NDK; the parameters select the platform and arch. This example is for ARM, but I'm sure you can easily modify it to make several builds for a couple of ARM variants and x86. After creating the toolchain we add it to the PATH env variable.

# Configure toolchain path

ENV ARCH armv7

ENV CROSS_COMPILE arm-linux-androideabi
ENV CC arm-linux-androideabi-gcc
ENV CXX arm-linux-androideabi-g++
ENV AR arm-linux-androideabi-ar
ENV AS arm-linux-androideabi-as
ENV LD arm-linux-androideabi-ld
ENV RANLIB arm-linux-androideabi-ranlib
ENV NM arm-linux-androideabi-nm
ENV STRIP arm-linux-androideabi-strip
ENV CHOST arm-linux-androideabi

ENV CPPFLAGS -std=c++11

Here we set the ENV variables for cross-compilation; CC is the main C compiler.

# download, configure and make Zlib

RUN curl -O http://zlib.net/zlib-1.2.8.tar.gz && \
	tar -xzf zlib-1.2.8.tar.gz && \
	mv zlib-1.2.8 zlib
RUN cd zlib && ./configure --static && \
	make && \
	ls -hs . && \
	cp libz.a /Android/output

Here we download the latest version of zlib (you can change the link to a newer version), extract it, then configure and make it with the --static parameter, all using the newly created Android toolchain. As a last step we put the resulting libz.a into the /Android/output folder, where we will collect all the results.

# Download and extract curl

ENV CFLAGS -v -DANDROID --sysroot=$SYSROOT -mandroid -march=$ARCH -mfloat-abi=softfp -mfpu=vfp -mthumb
ENV CPPFLAGS $CPPFLAGS $CFLAGS
ENV LDFLAGS -L${TOOLCHAIN}/include


RUN curl -O http://curl.haxx.se/download/curl-7.38.0.tar.gz && \
	tar -xzf curl-7.38.0.tar.gz
RUN cd curl-7.38.0 && ./configure --host=arm-linux-androideabi --disable-shared --enable-static --disable-dependency-tracking --with-zlib=/Android/zlib --without-ca-bundle --without-ca-path --enable-ipv6 --disable-ftp --disable-file --disable-ldap --disable-ldaps --disable-rtsp --disable-proxy --disable-dict --disable-telnet --disable-tftp --disable-pop3 --disable-imap --disable-smtp --disable-gopher --disable-sspi --disable-manual --target=arm-linux-androideabi --build=x86_64-unknown-linux-gnu || cat config.log

# Make curl 

RUN cd curl-7.38.0 && \
	make && \
	ls lib/.libs/ && \
	cp lib/.libs/libcurl.a /Android/output && \
	ls -hs /Android/output && \
	mkdir /output

The most important step: here we download curl. curl's configure script has a lot of parameters. I don't use SSL at the moment, so I don't need to compile OpenSSL (maybe I will add it here later). The most important params are --host=arm-linux-androideabi and --target=arm-linux-androideabi, as they tell the configuration script this is a cross-compilation.

We also set the compilation options CFLAGS / LDFLAGS here. You can tune these to your heart's content.

Note: if something goes wrong during ./configure, you can append "|| cat config.log" to the end of the line to see your errors during the building of the container image.

# To get the results run container with output folder
# Example: docker run -v HOSTFOLDER:/output --rm=true IMAGENAME 

ENTRYPOINT cp -r /Android/output/* /output

The last step sets the ENTRYPOINT. We have to deliver the result of our compilation to the host OS (or we could easily run some web server and host it, but I decided to go the first way).

To build the container, put the Dockerfile into some folder on the host OS and run:

docker build -t android/curl .

android/curl here is just the name I gave to the newly created image. The process will take some time. You can see the list of created images with the command: docker images

Then we run the container only to copy the output files to the host:

docker run -v ~/Android/output:/output --rm=true android/curl

The -v key mounts the local ~/Android/output folder onto the container's /output folder.

Here is the whole Dockerfile:

# THIS DOCKERFILE TRIES TO COMPILE CURL FOR ANDROID
 
FROM ubuntu

MAINTAINER Victor Laskin "victor.laskin@gmail.com"

# Install compilation tools

RUN apt-get update && apt-get install -y \
    automake \
    build-essential \
    wget \
    p7zip-full \
    bash \
    curl


# Download SDK / NDK

RUN mkdir /Android && cd Android && mkdir output
WORKDIR /Android

RUN wget http://dl.google.com/android/android-sdk_r23.0.2-linux.tgz
RUN wget http://dl.google.com/android/ndk/android-ndk-r10c-linux-x86_64.bin

# Extracting ndk/sdk

RUN tar -xvzf android-sdk_r23.0.2-linux.tgz && \
	chmod a+x android-ndk-r10c-linux-x86_64.bin && \
	7z x android-ndk-r10c-linux-x86_64.bin

# Set ENV variables

ENV ANDROID_HOME /Android/android-sdk-linux
ENV NDK_ROOT /Android/android-ndk-r10c
ENV PATH $PATH:$ANDROID_HOME/tools
ENV PATH $PATH:$ANDROID_HOME/platform-tools

# Make stand alone toolchain (Modify platform / arch here)

RUN mkdir toolchain-arm && bash $NDK_ROOT/build/tools/make-standalone-toolchain.sh --verbose --platform=android-19 --install-dir=toolchain-arm --arch=arm --toolchain=arm-linux-androideabi-4.9 --system=linux-x86_64

ENV TOOLCHAIN /Android/toolchain-arm
ENV SYSROOT $TOOLCHAIN/sysroot
ENV PATH $PATH:$TOOLCHAIN/bin:$SYSROOT/usr/local/bin

# Configure toolchain path

ENV ARCH armv7

ENV CROSS_COMPILE arm-linux-androideabi
ENV CC arm-linux-androideabi-gcc
ENV CXX arm-linux-androideabi-g++
ENV AR arm-linux-androideabi-ar
ENV AS arm-linux-androideabi-as
ENV LD arm-linux-androideabi-ld
ENV RANLIB arm-linux-androideabi-ranlib
ENV NM arm-linux-androideabi-nm
ENV STRIP arm-linux-androideabi-strip
ENV CHOST arm-linux-androideabi

ENV CPPFLAGS -std=c++11

# download, configure and make Zlib

RUN curl -O http://zlib.net/zlib-1.2.8.tar.gz && \
	tar -xzf zlib-1.2.8.tar.gz && \
	mv zlib-1.2.8 zlib
RUN cd zlib && ./configure --static && \
	make && \
	ls -hs . && \
	cp libz.a /Android/output

# Download and extract curl

ENV CFLAGS -v -DANDROID --sysroot=$SYSROOT -mandroid -march=$ARCH -mfloat-abi=softfp -mfpu=vfp -mthumb
ENV CPPFLAGS $CPPFLAGS $CFLAGS
ENV LDFLAGS -L${TOOLCHAIN}/include


RUN curl -O http://curl.haxx.se/download/curl-7.38.0.tar.gz && \
	tar -xzf curl-7.38.0.tar.gz
RUN cd curl-7.38.0 && ./configure --host=arm-linux-androideabi --disable-shared --enable-static --disable-dependency-tracking --with-zlib=/Android/zlib --without-ca-bundle --without-ca-path --enable-ipv6 --disable-ftp --disable-file --disable-ldap --disable-ldaps --disable-rtsp --disable-proxy --disable-dict --disable-telnet --disable-tftp --disable-pop3 --disable-imap --disable-smtp --disable-gopher --disable-sspi --disable-manual --target=arm-linux-androideabi --build=x86_64-unknown-linux-gnu || cat config.log

# Make curl 

RUN cd curl-7.38.0 && \
	make && \
	ls lib/.libs/ && \
	cp lib/.libs/libcurl.a /Android/output && \
	ls -hs /Android/output && \
	mkdir /output


# libzip

RUN curl -O http://www.nih.at/libzip/libzip-0.11.2.tar.gz && \
	tar -xzf libzip-0.11.2.tar.gz && \
	mv libzip-0.11.2 libzip && \
	cd libzip && \
	./configure --help && \
	./configure --enable-static --host=arm-linux-androideabi --target=arm-linux-androideabi && \
	make && \
	ls -hs lib && \
	cp lib/.libs/libzip.a /Android/output && \
	mkdir /Android/output/ziplib && \
	cp lib/*.c /Android/output/ziplib && \
	cp lib/*.h /Android/output/ziplib && \
	cp config.h /Android/output/ziplib


# To get the results run container with output folder
# Example: docker run -v HOSTFOLDER:/output --rm=true IMAGENAME 

ENTRYPOINT cp -r /Android/output/* /output

So this small script creates your whole compilation environment and does the job!

[Screenshot: build output, 2014-10-28]

As you can see, if something changes, this Dockerfile lets you recompile your libs with a different NDK or toolchain pretty fast. Now you can think about how to expand this solution into automatic server-side building of your projects (daily builds). Docker also gives some new freedom to distributed architectures. At the moment I look at CoreOS as a perfect environment for containers to live in, but that is a subject for a separate thread.

Please report any mistakes in the comments.

BONUS:

# libzip 

RUN curl -O http://www.nih.at/libzip/libzip-0.11.2.tar.gz && \
	tar -xzf libzip-0.11.2.tar.gz && \
	mv libzip-0.11.2 libzip && \
	cd libzip && \
	./configure --enable-static --host=arm-linux-androideabi --target=arm-linux-androideabi && \
	make && \
	cp lib/.libs/libzip.a /Android/output && \
	mkdir /Android/output/ziplib && \
	cp lib/*.c /Android/output/ziplib && \
	cp lib/*.h /Android/output/ziplib && \
	cp config.h /Android/output/ziplib

We can compile some more libs the same way; here is libzip for Android.

UPDATE (5 July 2015): OpenSSL + curl using the latest NDK with clang 3.6 support

I just put a new Dockerfile on github which contains a build section for the OpenSSL library (libssl.a / libcrypto.a), so it's pretty easy to build libcurl with SSL, which enables the HTTPS protocol. The Dockerfile also uses the latest NDK with clang 3.6 as the building toolchain. This version is for armv7, but I'm sure it is not so complicated to change it to another ARCH.

Some important lines from my current Application.mk / Android.mk files (just for example; yours might be different):

APP_STL := c++_static #gnustl_static  #stlport_static
APP_CPPFLAGS += -fexceptions -std=c++14
APP_CPPFLAGS += -Wno-error=format-security -Wno-extern-c-compat
NDK_TOOLCHAIN_VERSION := clang

LOCAL_C_INCLUDES += $(LOCAL_PATH)/libs/curl/include 
LOCAL_C_INCLUDES += $(LOCAL_PATH)/libs/openssl/include

LOCAL_LDLIBS    := ./libs/libzip.a ./libs/libcurl.a ./libs/libssl.a ./libs/libcrypto.a -lz -llog -landroid -lEGL -lGLESv2 -lOpenSLES -latomic

This configuration also gives ability to use C++14 on Android! 🙂
