Victor Laskin’s Blog – https://vitiy.info
Programming, architecture and design (C++, QT, .Net/WPF, Android, iOS, NoSQL, distributed systems, mobile development, image processing, etc.)

Immutable serialisable data structures in C++11
https://vitiy.info/immutable-data-and-serialisation-in-cpp11/
Mon, 29 Jun 2015 11:18:03 +0000

As C++ is a very flexible language, there are many ways to construct immutable-like data structures in the style of functional programming. For example, this nice presentation by Kevlin Henney contains one of them. Here I will present my own way, which is slightly different.

At the same time, my approach also solves a related problem: serialisation of such immutable structures. I use the JSON format at the moment, but the approach is not limited to this format only.

Unlike my previous posts, this time I will start with the results.

I assume you are familiar with the general concept of immutability. If you are not, please read some minimal introduction to the immutable data concept in functional programming.

FIRST PART – USAGE

At the moment I have structure declarations similar to this:

class EventData : public IImmutable {
  public:
    const int id;
    const string title;
    const double rating;

    SERIALIZE_JSON(EventData, id, title, rating);
};

using Event = EventData::Ptr;

The data fields of the class are declared const, and that is all it takes to be sure the data will not be changed. Note that I don’t turn fields into functions, hide them behind getters, or do any other tricky stuff. I just mark the fields const.

Next – SERIALIZE_JSON. This macro contains all the magic. Unfortunately, we can’t currently achieve introspection in C++ without a macro declaration, so this is black magic again. I promise the next post will not contain macros 🙂

The last step is sharing immutable data through a smart pointer to reduce the copying overhead introduced by functional-style data processing. I use such pointers implicitly: the plain data class is named EventData, while the pointer to such data is named simply Event. This naming is debatable – you don’t have to follow it.

On the cost of shared_ptr there is a nice video from the NDC conference – The real price of Shared Pointers in C++ by Nicolai M. Josuttis.

Before presenting some usage examples, let’s make the data a bit more realistic:

class ScheduleItemData : public IImmutable {
  public:
    const time_t start;
    const time_t finish;

    SERIALIZE_JSON(ScheduleItemData, start, finish);
};

using ScheduleItem = ScheduleItemData::Ptr;

class EventData : public IImmutable {
  public:
    const int id;
    const string title;
    const double rating;
    const vector<ScheduleItem> schedule;
    const vector<int> tags;

    SERIALIZE_JSON(EventData, id, title, rating, schedule, tags);
};

using Event = EventData::Ptr;

This describes some event which has an id, title, rating, a schedule as pairs of start/finish Unix times, and a vector of integer tags. All this is just to show nested immutable structures and const vectors as parts of serialisable data.

Note that you can still add methods to this class. Marking them const is a good idea.

IMMUTABILITY

Ok, it’s time for action! Structure creation is simple:

Event event = EventData(136, "Nice event", 4.88, {ScheduleItemData(1111,2222), ScheduleItemData(3333,4444)}, {45,323,55});

Using the new C++ initialisation syntax we can not only construct the immutable structure in a simple way, but also populate all nested collections. All constructors are generated automatically by the same black-magic macro.

Important: immutable data has no default constructor! You can only create a fully ‘filled’ data state. This is a good feature, as it is now very hard to end up with a corrupted, partially constructed state. It’s all or nothing. And of course you can still have an empty shared_ptr which contains no immutable data at all.

When you add a new field to the data structure, every place where the data was created explicitly will stop compiling. That might seem bad, but it is actually a very useful restriction: now you can’t forget to update object construction to match the new design.

As you can guess, all fields can be accessed in the usual way, but any modification will be rejected by the compiler.


To modify immutable data we have to create a new copy with the changed field. So it’s not a modification, but the construction of a new object. The same macro generates a set_ method for each field to do exactly that (see the implementation part below).

Note: the field type is derived automatically using decltype from C++11.

To get an idea of how to use such immutable data in a functional way with C++11, you can read several posts: post1, post2, post3 (and, probably, more are coming).

SERIALIZATION

Just two methods: toJSON() / fromJSON() are enough to handle all serialisation/deserialisation needs:

// serialisation
string json = event->toJSON();

// deserialisation
Event eventCopy = EventData::fromJSON(json);

Output:

{"id":136,"title":"Nice event","rating":4.880000,"schedule":[{"start":1111,"finish":2222},{"start":3333,"finish":4444}],"tags":[45,323,55]}

Sweet and simple. And note that we can (de)serialise nested structures / arrays.

The serialisation part is optional – if you only need immutability, you don’t have to include it. The immutability implementation does not use any serialisation implicitly.

SECOND PART – IMPLEMENTATION

Under the hood is a fusion of macro magic and C++11 features like decltype. I’ll try to convey the main implementation ideas rather than just copying the whole code. If you are not going to implement this approach yourself, you can skip this part and take my word that it works.

The next part will be a bit ugly. Be sure you are 16+ before reading this.


I also had an additional limitation that made life harder – I could not use constexpr. I use immutable data in cross-platform solutions, and one of my targets is Windows desktop. I don’t know why, but constexpr support in Microsoft Visual Studio 2015 is still incomplete. So don’t be surprised that I use a couple of old-school macro techniques to make the functional stuff work more comfortably.

First we need a macro which applies another macro to each of its parameters.

#define SERIALIZE_PRIVATE_DUP1(M,NAME,A) M(NAME,A)
#define SERIALIZE_PRIVATE_DUP2(M,NAME,A,B) M(NAME,A) M(NAME,B)
#define SERIALIZE_PRIVATE_DUP3(M,NAME,A,B,C) M(NAME,A) SERIALIZE_PRIVATE_DUP2(M,NAME,B,C)
#define SERIALIZE_PRIVATE_DUP4(M,NAME,A,B,C,D) M(NAME,A) SERIALIZE_PRIVATE_DUP3(M,NAME,B,C,D)
#define SERIALIZE_PRIVATE_DUP5(M,NAME,A,B,C,D,E) M(NAME,A) SERIALIZE_PRIVATE_DUP4(M,NAME,B,C,D,E)
#define SERIALIZE_PRIVATE_DUP6(M,NAME,A,B,C,D,E,F) M(NAME,A) SERIALIZE_PRIVATE_DUP5(M,NAME,B,C,D,E,F)
#define SERIALIZE_PRIVATE_DUP7(M,NAME,A,B,C,D,E,F,G) M(NAME,A) SERIALIZE_PRIVATE_DUP6(M,NAME,B,C,D,E,F,G)
#define SERIALIZE_PRIVATE_DUP8(M,NAME,A,B,C,D,E,F,G,H) M(NAME,A) SERIALIZE_PRIVATE_DUP7(M,NAME,B,C,D,E,F,G,H)
#define SERIALIZE_PRIVATE_DUP9(M,NAME,A,B,C,D,E,F,G,H,I) M(NAME,A) SERIALIZE_PRIVATE_DUP8(M,NAME,B,C,D,E,F,G,H,I)
#define SERIALIZE_PRIVATE_DUP10(ME,NAME,A,B,C,D,E,F,G,H,I,K) ME(NAME,A) SERIALIZE_PRIVATE_DUP9(ME,NAME,B,C,D,E,F,G,H,I,K)
#define SERIALIZE_PRIVATE_DUP11(ME,NAME,A,B,C,D,E,F,G,H,I,K,L) ME(NAME,A) SERIALIZE_PRIVATE_DUP10(ME,NAME,B,C,D,E,F,G,H,I,K,L)
#define SERIALIZE_PRIVATE_DUP12(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M) ME(NAME,A) SERIALIZE_PRIVATE_DUP11(ME,NAME,B,C,D,E,F,G,H,I,K,L,M)
#define SERIALIZE_PRIVATE_DUP13(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N) ME(NAME,A) SERIALIZE_PRIVATE_DUP12(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N)
#define SERIALIZE_PRIVATE_DUP14(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O) ME(NAME,A) SERIALIZE_PRIVATE_DUP13(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O)
#define SERIALIZE_PRIVATE_DUP15(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P) ME(NAME,A) SERIALIZE_PRIVATE_DUP14(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P)
#define SERIALIZE_PRIVATE_DUP16(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R) ME(NAME,A) SERIALIZE_PRIVATE_DUP15(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R)
#define SERIALIZE_PRIVATE_DUP17(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S) ME(NAME,A) SERIALIZE_PRIVATE_DUP16(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S)
#define SERIALIZE_PRIVATE_DUP18(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T) ME(NAME,A) SERIALIZE_PRIVATE_DUP17(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T)
#define SERIALIZE_PRIVATE_DUP19(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T,Q) ME(NAME,A) SERIALIZE_PRIVATE_DUP18(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T,Q)
#define SERIALIZE_PRIVATE_DUP20(ME,NAME,A,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T,Q,Y) ME(NAME,A) SERIALIZE_PRIVATE_DUP19(ME,NAME,B,C,D,E,F,G,H,I,K,L,M,N,O,P,R,S,T,Q,Y)


#define SERIALIZE_PRIVATE_EXPAND(x) x
#define SERIALIZE_PRIVATE_DUPCALL(N,M,NAME,...) SERIALIZE_PRIVATE_DUP ## N (M,NAME,__VA_ARGS__)

// counter of macro arguments + actual call
#define SERIALIZE_PRIVATE_VA_NARGS_IMPL(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10, _11,_12,_13,_14,_15,_16,_17,_18,_19,_20, N, ...) N
#define SERIALIZE_PRIVATE_VA_NARGS(...) SERIALIZE_PRIVATE_EXPAND(SERIALIZE_PRIVATE_VA_NARGS_IMPL(__VA_ARGS__, 20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1))
#define SERIALIZE_PRIVATE_VARARG_IMPL2(M,NAME,base, count, ...) SERIALIZE_PRIVATE_EXPAND(base##count(M,NAME,__VA_ARGS__))
#define SERIALIZE_PRIVATE_VARARG_IMPL(M,NAME,base, count, ...) SERIALIZE_PRIVATE_EXPAND(SERIALIZE_PRIVATE_VARARG_IMPL2(M,NAME,base, count, __VA_ARGS__))
#define SERIALIZE_PRIVATE_VARARG(M,NAME,base, ...) SERIALIZE_PRIVATE_EXPAND(SERIALIZE_PRIVATE_VARARG_IMPL(M,NAME, base, SERIALIZE_PRIVATE_VA_NARGS(__VA_ARGS__), __VA_ARGS__))

#define SERIALIZE_PRIVATE_DUPAUTO(M,NAME,...) SERIALIZE_PRIVATE_EXPAND(SERIALIZE_PRIVATE_VARARG(M,NAME,SERIALIZE_PRIVATE_DUP, __VA_ARGS__))

This is very ugly, but it works (clang, gcc, VS). As you might guess, this specific version can handle at most 20 arguments.

I will split the whole macro into blocks, and to improve readability I will omit the ‘\’ that ends each line of a multi-line macro. In each block the ‘helper’ macros are placed below the code that uses them, so related content can be read together.

Main constructor:

// Main constructor 
NAME(SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLEDECL,NAME,__VA_ARGS__) int finisher = 0) :    
SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLEPARAM,NAME,__VA_ARGS__)IImmutable(){}   


#define SERIALIZE_PRIVATE_CTORIMMUTABLEDECL(NAME,VAL) decltype(VAL) VAL,
#define SERIALIZE_PRIVATE_CTORIMMUTABLEPARAM(NAME,VAL) VAL(VAL),

NAME is the name of the class; __VA_ARGS__ is the list of all fields. So the macro expands into something like NAME(decltype(a) a, decltype(b) b, int finisher = 0) : a(a), b(b), IImmutable() {}. Here IImmutable is an optional base class, which can be empty. I also use the finisher argument to solve the trailing-comma problem – maybe there is a cleaner solution for this.

The other part of the magic here is decltype from C++11.

Additional copy constructors:

// copy constructors
         
NAME(NAME&& other) noexcept :   
SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLECOPY,NAME,__VA_ARGS__)IImmutable() 
{}                                                                                                          
NAME(const NAME& other) noexcept :
SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLECOPY,NAME,__VA_ARGS__)IImmutable(){}


#define SERIALIZE_PRIVATE_CTORIMMUTABLECOPY(NAME,VAL) VAL(other.VAL),

Note that we can’t move immutable data. Move semantics modify the source object and immutable objects can’t be modified.

The last constructor is from std::tuple. We need it for modification / serialisation methods.

// Constructor from tuple
NAME(std::tuple< SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLEDECLTYPENONCONST,NAME,__VA_ARGS__) int> vars) : SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORFROMTUPLE,NAME,__VA_ARGS__)IImmutable() {}   

#define SERIALIZE_PRIVATE_CTORIMMUTABLEDECLTYPENONCONST(NAME,VAL) typename std::remove_const<decltype(VAL)>::type,
   
#define SERIALIZE_PRIVATE_CTORFROMTUPLE(NAME,VAL) VAL(std::get<SERIALIZE_PRIVATE_GETINDEX(NAME,VAL)>(vars)),

The tricky part here is that to construct from a tuple we need to know the index of each field inside it. Computing such an index would require C++14’s relaxed constexpr (C++11 does not allow incrementing inside constexpr). Instead, we can use the __COUNTER__ macro to create additional static fields which hold the indexes.

// Index of each field inside class:
static const int _index_offset = __COUNTER__;       
SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_FIELDINDEX,NAME,__VA_ARGS__)     
      
#define SERIALIZE_PRIVATE_FIELDINDEX(NAME,VAL) static const int _index_of_##VAL = __COUNTER__ - _index_offset - 1;

Note that the __COUNTER__ macro is global, so we generate an additional offset field to store its initial value and subtract it from each index.

Generation of ‘set_’ methods:

// Generation of set_ methods
SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CLONEANDSET2,NAME,__VA_ARGS__) 

// Convert data into std::tuple
std::tuple< SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLEDECLTYPENONCONST,NAME,__VA_ARGS__) int> toTuple() const noexcept {
    return make_tuple(SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_CTORIMMUTABLEVAL,NAME,__VA_ARGS__) 0);    
} 

#define SERIALIZE_PRIVATE_CLONEANDSET2(NAME,VAL) NAME::Ptr set_##VAL(decltype(VAL) VAL) const noexcept {   
    auto t = toTuple();
    std::get<SERIALIZE_PRIVATE_GETINDEX(NAME,VAL)>(t) = VAL;
    return std::make_shared<NAME>(NAME(t));                       
}

Unfortunately, the tuple also carries an int finisher to solve the trailing-comma problem. Maybe I’ll find a way to avoid it.

To make the declaration of holding pointers shorter:

// Short smart-pointer declaration
typedef std::shared_ptr<NAME> Ptr;

Additional overload of compare operator:

// compare operator overload
bool operator== (const NAME& other) const noexcept {
    SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_COMPAREIMMUTABLE,NAME,__VA_ARGS__) return true; 
    return false;
}

#define SERIALIZE_PRIVATE_COMPAREIMMUTABLE(NAME,VAL) if (other.VAL==VAL)

This overload can be adjusted to your business logic – for example, you could compare only the id fields.

The whole serialisation part of the macro is quite short:

// JSON serialisation is done using my own old lib 		
string toJSON() const noexcept {
    JSON::MVJSONWriter w;
    w.begin();                        
    SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_APPENDTOJSON,NAME,__VA_ARGS__)     
    w.end();                           
    return w.result;              
}                            

static NAME fromJSON(string json)
{                                   
   JSON::MVJSONReader reader(json);            
   return fromJSON(reader.root); 
}                                                                        
               
static NAME fromJSON(JSON::MVJSONNode* node)
{            
    return NAME( make_tuple(SERIALIZE_PRIVATE_DUPAUTO(SERIALIZE_PRIVATE_FROMJSON,NAME,__VA_ARGS__) 0));     
}                                                                                  


#define SERIALIZE_PRIVATE_APPENDTOJSON(NAME,VAL) w.add(#VAL,VAL);
#define SERIALIZE_PRIVATE_FROMJSON(NAME,VAL) node->getValue<decltype(VAL)>(#VAL),

Here I use my old JSON parsing lib (described here), but you can use any JSON decoder you like. The only requirement is that you write wrappers so your lib can read/write values through a single entry point. Below I show an example of how to distinguish vector and non-vector types.

The JSON writer uses simple overloading / template specialisation:

template< typename T >
inline string toString(const T& value);
        
template< typename T >
inline string toString(const vector<T>& value);

A couple of example implementations:

template< typename T >
inline string MVJSONWriter::toString(const T& value)
{
        // default overload is for string-like types!
        return "\"" + value + "\"";        
}
   
template< typename T >
inline string MVJSONWriter::toString(const vector<T>& value)
{
        string result = "[";
        for (auto item : value)
            result += ((result != "[") ? "," : "") + item->toJSON();
        result += "]";
        return result;
}
    
template<>
inline string MVJSONWriter::toString(const vector<int>& value)
{
        string result = "[";
        for (auto item : value)
            result += ((result != "[") ? "," : "") + std::to_string(item);
        result += "]";
        return result;
}

So all type variations are hidden inside the JSON reading/writing section.

The reader needs a more complicated form of overloading, because we have to overload on the return type only. To make this possible we use a dummy parameter, std::enable_if, and a simple type trait for vector detection.

template <typename T> struct is_vector { static const bool value = false; };
template <typename T> struct is_vector< std::vector<T> > { static const bool value = true; };
template <typename T> struct is_vector< const std::vector<T> > { static const bool value = true; };

// inside reader class:
    
template<class T>
inline T getValue(const string& name, typename enable_if<!is_vector<T>::value, T>::type* = nullptr);
    
template<class T>
inline T getValue(const string& name, typename enable_if<is_vector<T>::value, T>::type* = nullptr);

Some overloads:

template<class T>
inline T
MVJSONNode::getValue(const string& name, typename enable_if<!is_vector<T>::value, T>::type*)
{
        MVJSONValue* value = getField(name);
        if (value == NULL) return "";
        return value->stringValue;
}
    
template<class T>
inline T
MVJSONNode::getValue(const string& name, typename enable_if<is_vector<T>::value, T>::type*)
{
        typename std::remove_const<T>::type result;
        MVJSONValue* value = getField(name);
        if (value == NULL) return result;
        for (auto item : value->arrayValue)
        {
            result.push_back(remove_pointer<decltype(std::declval<T>().at(0).get())>::type::fromJSON(item->objValue));
        }
        return result;
}
    
    
template<>
inline const vector<int>
MVJSONNode::getValue(const string& name, typename enable_if<is_vector<const vector<int>>::value, const vector<int>>::type*)
{
        vector<int> result;
        MVJSONValue* value = getField(name);
        if (value == NULL) return result;
        for (auto item : value->arrayValue)
        {
            result.push_back((int)item->intValue);
        }
        return result;
}

So you need to provide overloads for all your primitive types and containers. In practice this is not so compact, but you only have to do it once.

CONCLUSION

Once again, all this ugly implementation detail is written only once and lives at a low level of the architecture. The business layer above just uses this utility as a black box and need not be aware of the inner implementation.

Anyway, the main idea of this post was to show how compact the declaration of immutable data can be in C++11, and that you don’t have to write extra boilerplate code for each business class declaration.

Any comments are welcome.

UPDATE: I put some source code on github as 2 gists:

My old JSON lib – https://gist.github.com/VictorLaskin/1fb078d7f4ac78857f48

Declaration – https://gist.github.com/VictorLaskin/48d1336e8b6eea16414b

Please treat this code as just an example.

Separating constraints, iterations and data (C++11)
https://vitiy.info/separating-constraints-iterations-and-data-cpp11/
Sun, 17 May 2015 21:37:45 +0000

Two recent posts in Bartosz’s programming cafe describe a nice application of the list monad to solve the following puzzle:

Each letter corresponds to a single digit. There are many ways to solve this. Bartosz uses the list monad, which is very similar to the list comprehension methods described here. While this may not be the fastest way to solve this specific puzzle, his approach shows how to solve a large cluster of similar but smaller problems that we meet at the everyday “production” level. The SEND+MORE problem is maybe not the best example to show the power of the list monad, because of one very important issue I want to discuss in this post.

Let’s rephrase the puzzle – we have 8 different variables and have to find all combinations which satisfy some constraints.

The straightforward solution: form all possible combinations of the variables and filter them with the constraint conditions. To form such combinations we iterate over the list of possible values.

The problem: when the list of values is not small, or the number of variables is greater than 3, we face a performance problem as the iteration space becomes too large.

The size of the SEND+MORE puzzle is close to this threshold, but a modern CPU can still handle it the straightforward way.

SLIGHTLY DIFFERENT WAY

The main idea is to separate iterations, data and constraints. Most importantly, we need to split the one global constraint into smaller constraints and apply them as early as possible.

While doing that, I want to preserve the simplicity of Bartosz’s solution.

To make it all work I will use some tools from my previous post. The main parts are function currying and functional piping.

DATA:

// 
using sInt = std::shared_ptr<int>;

// the list of possible values
vector<int> digits = {0,1,2,3,4,5,6,7,8,9};

// variables to find
sInt s,e,n,d,m,o,r,y;
        
// additional vars (described further)
sInt r0,r1,r2,r3;

// fill variables (0)
for_each_argument_reference([](sInt& i){ make(i,0); }, s,e,n,d,m,o,r,y,r0,r1,r2,r3);

I use shared_ptr to access data here – that’s not essential in this particular example; even raw pointers would show the idea. The important parts are the list of possible values, digits, and the pointers to all variables.

Next – let’s define CONSTRAINTS:

// No constraint
auto any = to_fn([](sInt x){ return true; });

A constraint is a function which returns true if the given value passes the condition. So the constraint any gives a green light to every integer. The to_fn function here just converts a lambda into std::function.

// This is how we add numbers digit by digit:
// 0  + d + e = y + r1 * 10
// r1 + n + r = e + r2 * 10
// r2 + e + o = n + r3 * 10
// r3 + s + m = o + m * 10

auto fn_constraint = to_fn([](sInt r0, sInt x, sInt y, sInt z, sInt r){
     // r0 + x + y = z + r * 10
     *r = *r0 + *x + *y - *z;
     if (*r == 0) return true;
     if (*r == 10) { *r = 1; return true; };
     return false;
});
    
const auto constraint = fn_to_universal(fn_constraint);

Instead of inventing some tricky constraints, we go a very simple and logical way – our constraint just encodes how we add decimal numbers digit by digit. Nothing more, nothing less. r0, r1, r2, r3 are the carries that move to the next column during addition.

The only ‘not so nice’ step here is setting r through a pointer. This is done so it can be used by the deeper constraints that follow.

After the definition I wrap the function into a universal class which can handle currying and piping – see this post for details.

The last column of digits is an exception, so we have to define a separate constraint for it:

auto fn_last_constraint = to_fn([](sInt r0, sInt x, sInt y, sInt z){
    // r0 + x + y = x + y * 10
    return (*y != 0) && (*r0 + *x + *y == *z + *y * 10);
});
const auto last_constraint = fn_to_universal(fn_last_constraint);

Note that we also check that the leading digit is non-zero.

So finally, instead of one global constraint we have 4 smaller constraints and can apply them earlier to reduce the number of iterations.

ITERATIONS

Functional iterator is simple:

void fn_pick(sInt x, function<bool(sInt)> constraint, function<void(vector<int>)> process, vector<int> list)
{
    for (auto item : list)
    {
        *x = item;
        if (constraint(x))
            process(list | filter >> [&](int el){ return (el != item); });
    }
}
  
fn_make_universal(pick, fn_pick);

This function just picks each possible value from the list, applies the constraint, and if the check passes, calls the process function with the reduced list of values (which no longer contains the picked value).

The last piece is the function to print the result:

auto printResult = [&](vector<int> list){ printf("RESULT %i%i%i%i + %i%i%i%i = %i%i%i%i%i \n", *s,*e,*n,*d,*m,*o,*r,*e,*m,*o,*n,*e,*y); };

FINALLY

digits | pick << d << any <<
        (pick << e << any <<
        (pick << y << (constraint << r0 << d << e >> r1) <<
        (pick << n << any <<
        (pick << r << (constraint << r1 << n >> e >> r2) <<
        (pick << o << (constraint << r2 << e >> n >> r3) <<
        (pick << s << any <<
        (pick << m << (last_constraint << r3 << s >> o )
        << printResult )))))));

// RESULT 9567 + 1085 = 10652

Sorry that I’m using my ‘<<’ notation for currying here – it might not be the ideal solution, but I hope it won’t prevent you from understanding the idea of separation. Of course, the operator overloading could be changed to some other notation. Note that I use left and right currying together inside the constraints.

This is compact enough to show the main idea – decomposing iterations and constraints.

My debug build solves this puzzle in 7 ms.

PS. What I don’t like about this solution is the heavy use of pointers. We could change the design to pass data along the functional chain without pointers, but that would make the solution a bit more complicated. Maybe I will fix it later. I’m also looking for a way to get rid of the ‘)))))))’ stuff.

PS2: Whole puzzle solution together:

using sInt = std::shared_ptr<int>;
    
// ITERATIONS
void fn_pick(sInt x, function<bool(sInt)> constraint, function<void(vector<int>)> process, vector<int> list){
    for (auto item : list)
    {
        *x = item;
        if (constraint(x))
           process(list | filter >> [&](int el){ return (el != item); });
    }
}

fn_make_universal(pick, fn_pick);

// DATA
vector<int> digits = {0,1,2,3,4,5,6,7,8,9};
sInt s,e,n,d,m,o,r,y;
sInt r0,r1,r2,r3;
for_each_argument_reference([](sInt& i){ make(i,0); }, s,e,n,d,m,o,r,y,r0,r1,r2,r3);

// CONSTRAINTS
auto any = to_fn([](sInt x){ return true; });

auto fn_constraint = to_fn([](sInt r0, sInt x, sInt y, sInt z, sInt r){
    // r0 + x + y = z + r * 10
    *r = *r0 + *x + *y - *z;
    if (*r == 0) return true;
    if (*r == 10) { *r = 1; return true; };
    return false;
});
const auto constraint = fn_to_universal(fn_constraint);

auto fn_last_constraint = to_fn([](sInt r0, sInt x, sInt y, sInt z){
     // r0 + x + y = z + y * 10
     return (*y != 0) && (*r0 + *x + *y == *z + *y * 10);
});
const auto last_constraint = fn_to_universal(fn_last_constraint);

// print out the result
auto printResult = [&](vector<int> list){ printf("RESULT %i%i%i%i + %i%i%i%i = %i%i%i%i%i \n", *s,*e,*n,*d,*m,*o,*r,*e,*m,*o,*n,*e,*y); };
     
// ROCK&ROLL
digits | pick << d << any <<
        (pick << e << any <<
        (pick << y << (constraint << r0 << d << e >> r1) <<
        (pick << n << any <<
        (pick << r << (constraint << r1 << n >> e >> r2) <<
        (pick << o << (constraint << r2 << e >> n >> r3) <<
        (pick << s << any <<
        (pick << m << (last_constraint << r3 << s >> o )
      << printResult )))))));

PS3. Bartosz’s programming cafe is a very good place to visit.

C++11: Implementation of list comprehension in SQL-like form
https://vitiy.info/cpp11-writing-list-comprehension-in-form-of-sql/
Mon, 16 Feb 2015 18:29:31 +0000

List comprehension, in functional languages, is the name of a list-constructor syntax similar to set-builder notation in mathematics.

What are the benefits of using list comprehension? One is readability; another is that it decouples iteration from the actual construction. We could even hide parallel execution under the hood of a list comprehension. Also, by adding extra options to the declaration we can make list construction much shorter.

If we look closer at list comprehension syntax, it reminds us of another very familiar thing – an SQL select! The output expression, input set, and predicates correspond to the select, from, where sequence (not exactly, of course, but they are very alike). Ok, let’s implement such syntactic sugar in C++11 (without boost or LINQ-like libs).

You can skip implementation details and scroll to examples to see the result first.

SHORT IMPLEMENTATION

Let’s define whole operation as simple function which will produce vector<> of something.

template <typename R, typename... Args, typename... Sources, typename... Options>
vector<R> select(std::function<R(Args...)> f,         ///< output expression
   const std::tuple<Sources...>& sources,             ///< list of sources
   const std::function<bool(Args...)>& filter,        ///< composition of filters
   const Options&... options                          ///< other options and flags
) { ... }

Feel the power of variadic templates. The first argument is a plain function whose declaration defines the output type and input variables. The next argument is a tuple of source containers; their number must equal the number of the output expression’s arguments. The third argument is a composition of ‘where’ filters. The remaining parameters are optional flags.

You might expect a lot of implementation code, but the core is quite compact.

Let’s write the main processing cycle:

template<std::size_t I = 0, typename FuncT, typename... Tp, typename... Args>
inline typename std::enable_if<I == sizeof...(Tp), bool>::type
for_each_in_sources(const std::tuple<Tp...> &, FuncT& f, Args&... args)
{
     return f(args...);
}
    
template<std::size_t I = 0, typename FuncT, typename... Tp, typename... Args>
inline typename std::enable_if<I < sizeof...(Tp), bool>::type
    for_each_in_sources(const std::tuple<Tp...>& t, FuncT& f, Args&... args)
{
    bool isFinished = false; // initialised so an empty source doesn't return garbage
    for(auto& element : std::get<I>(t))
    {
        isFinished = for_each_in_sources<I + 1, FuncT, Tp...>(t, f, args..., element);
        if (isFinished) break;
    }
    return isFinished;
}

// .... inside .....

vector<R> result;
int count = 0;
auto process = [&](const Args&... args){
    if (filter(args...))
    {
        result.push_back(f(args...));
        count++;
    }
    return (count == limit); // isFinished
};
        
for_each_in_sources(sources, process);

This is a quite straightforward approach – we go through all combinations of input elements using simple ranged for loops. So when using list comprehension, keep in mind that overly large input arrays will make computation too slow; for example, with 3 sources the complexity is O(n^3). I decided to add some minor protection here – a limit on the requested data (like the LIMIT clause in SQL syntax). When the limit is reached, the loops are aborted (though this does not cover all problematic cases).

As for the main working part, we use template recursion over the tuple of sources, performing a loop at each step. If you are not familiar with this approach, here is a working sample of simple iteration over tuple elements.

All other stuff below is optional!

The first additional thing to add is a set of optional flags to set the query limit, sorting options, and so on. For simple boolean flags we could just use an enum class from C++11, but if we want to set complex options (like manual sorting with a compare function, etc.) we can take the following approach:

class SelectOption {
public:
    virtual bool imPolymorphic() { return true; }
};
    
class SelectOptionLimit : public SelectOption {
public:
    int limit;
    SelectOptionLimit(int limit) : limit(limit) {}
};
    
class SelectOptionSort : public SelectOption {};
class SelectOptionDistict : public SelectOption {};

// ..... INSIDE select() .....

int limit = -1;
bool isDistinct = false;
bool isSorted = false;
        
for_each_argument([&](const SelectOption& option){
    if (auto opt = dynamic_cast<const SelectOptionLimit*>(&option)) {  limit = opt->limit; };
    if (dynamic_cast<const SelectOptionSort*>(&option)) { isSorted = true; };
    if (dynamic_cast<const SelectOptionDistict*>(&option)) { isDistinct = true; };
}, options...);

// .... HERE IS MAIN CYCLE ....

// sort results
if ((isDistinct) || (isSorted))
    std::sort(result.begin(), result.end());
        
// remove duplicates
if (isDistinct) {
    auto last = std::unique(result.begin(), result.end());
    result.erase(last, result.end());
}

Now our select has 3 additional options – limit, distinct and sort. Of course, we assume our result items are comparable if we expect sort to work.

We can add an additional overload to be able to call the select function without filters when we do not need them:

template <typename R, typename... Args, typename... Sources, typename... Options>
inline vector<R> select(std::function<R(Args...)> f,
                     const std::tuple<Sources...>& sources,
                     const Options&... options)
{
    return select(f, sources, to_fn([](Args... args){ return true; }), options...);
}

And to have some fun, let's do an evil thing:

#define SELECT(X) select(to_fn(X),
#define FROM(...) std::make_tuple(__VA_ARGS__)
#define WHERE(...) ,fn_logic_and(__VA_ARGS__)
#define SORT ,SelectOptionSort()
#define DISTINCT ,SelectOptionDistict()
#define LIMIT(X) ,SelectOptionLimit(X))
#define NOLIMIT LIMIT(-1)

Aside from quite obvious shortcuts for function calls, we are making two important conversions here. First – we convert the output expression into std::function using to_fn(). This is equivalent to to_function() from my previous post about functional decomposition in C++ and AOP. In short, it is a cast to std::function<..> with automatic detection of the argument types and return type.

The second thing is more interesting – fn_logic_and(…). This method builds one function from a bunch of filter functions (to be provided as the third argument of the main list comprehension function). It is not a classic function composition, as there are no chained calls; instead it is a logical operation over a list of functions. For the implementation of fn_logic_and() see the bottom of the post. This function is optional – you could pass the filter functions as a tuple and iterate over it, or store them in a vector.

Note that if we don't want to make copies of the sources (as std::make_tuple does) we can switch to std::forward_as_tuple to pass a tuple of references. But this might not be as safe. Anyway, the approach itself does not require copying the sources – you can use references or smart pointers to pass them (after minor modifications).

As a final touch before getting something that works, we need a source class for ranges of natural numbers (as the majority of list comprehension examples use integers). To use a ranged for over this source, we need to implement an Iterator inside.

class Naturals
{
        int min;
        int max;
    public:
        Naturals() : min(1),max(1000) {}
        Naturals(int min, int max) : min(min),max(max) {}
        int at(int i) const { return i + min; } ;
        int size() const { return max - min + 1; } ;
        
        class Iterator {
            int position;
        public:
            Iterator(int _position):position(_position) {}
            int& operator*() { return position; }
            Iterator& operator++() { ++position; return *this; }
            bool operator!=(const Iterator& it) const { return position != it.position; }
        };
        
        Iterator begin() const { return { min }; }
        Iterator end()   const { return { max + 1 }; } // one past the last value, so the range is inclusive
};

Note that by default the range is only [1..1000].
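As a quick sanity check, here is a compact self-contained copy of the range (in this variant end() returns one past the last value, so iteration is inclusive and agrees with size() = max - min + 1) plus a small summing helper of my own:

```cpp
#include <cassert>

// Compact self-contained copy of the Naturals range for testing
class Naturals {
    int min;
    int max;
public:
    Naturals() : min(1), max(1000) {}
    Naturals(int min, int max) : min(min), max(max) {}
    int size() const { return max - min + 1; }

    class Iterator {
        int position;
    public:
        explicit Iterator(int p) : position(p) {}
        int operator*() const { return position; }
        Iterator& operator++() { ++position; return *this; }
        bool operator!=(const Iterator& it) const { return position != it.position; }
    };

    Iterator begin() const { return Iterator(min); }
    Iterator end()   const { return Iterator(max + 1); } // one past the last value
};

// Sum all values produced by the ranged for - demonstrates the iterator
inline int sum_range(const Naturals& r) {
    int total = 0;
    for (int x : r) total += x;
    return total;
}
```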

Ok, let’s have some fun.

EXAMPLES OF LIST COMPREHENSION 

Let’s start from something trivial:

SELECT([](int x){ return x; }) FROM(Naturals()) LIMIT(10);

// List: 1 2 3 4 5 6 7 8 9 10

We can implement a map operation as a list comprehension:

auto ints = {11,2,1,5,6,7};
SELECT([](int x){ return x + 5; }) FROM(ints) NOLIMIT;

// List: 16 7 6 10 11 12

SELECT([](int x)->string { return x % 2 == 0 ? "BOOM" : "BANG"; }) FROM(ints) NOLIMIT;

// List: 'BANG' 'BOOM' 'BANG' 'BANG' 'BOOM' 'BANG'

Ok, time for filters. Let’s find matching elements inside two sets of integers (and sort them):

auto ints = {11,2,1,5,6,7};
auto ints2 = {3,4,5,7,8,11};
       
result = SELECT([](int x, int y){ return x; })
         FROM(ints, ints2)
         WHERE([](int x, int y){ return (x == y); }) SORT NOLIMIT;

// List: 5 7 11

Another example using DISTINCT:

SELECT([](int x, int y){ return x + y; }) FROM(Naturals(), Naturals()) WHERE([](int x, int y){ return (x*x + y*y < 25); }) DISTINCT LIMIT(10);

// List: 2 3 4 5 6

The next example comes from the Erlang guide on list comprehension – Pythagorean triplets:

int N = 50;
SELECT([&](int x, int y, int z){ return make_tuple(x,y,z); })
FROM(Naturals(1,N), Naturals(1,N), Naturals(1,N))
WHERE([&](int x, int y, int z){ return (x+y+z <= N); },
      [&](int x, int y, int z){ return (x*x + y*y == z*z); }) LIMIT(N);

Result:

-> 3 4 5
 -> 4 3 5
 -> 5 12 13
 -> 6 8 10
 -> 8 6 10
 -> 8 15 17
 -> 9 12 15
 -> 12 5 13
 -> 12 9 15
 -> 12 16 20
 -> 15 8 17
 -> 16 12 20

We could run select using custom data classes ( select id and name from users where id in [….] )

vector<User> users {make<User>(1, "John", 0), make<User>(2, "Bob", 1), make<User>(3, "Max", 1)};
auto ints = {11,2,1,5,6,7};

SELECT([](int x, User u){ return make_pair(u->id, u->name); })
FROM(ints, users)
WHERE([](int x, User u){ return (u->id == x); }) LIMIT(10);

//  -> 2, Bob
//  -> 1, John

We also can perform nested selects:

SELECT([](int x, int y){ return x*y; }) FROM(
  SELECT([](int x){return x;}) FROM(Naturals()) WHERE([](int x){ return x % 2 == 0; }) NOLIMIT,
  SELECT([](int x){return x;}) FROM(Naturals()) WHERE([](int x){ return x % 2 == 1; }) NOLIMIT
) SORT LIMIT(10);

// Result: 2 6 10 14 18 22 26 30 34 38

Sure, in Haskell this would be a lot more compact, but, as you can see, it's not as scary as you might expect from C++.

LAZY EVALUATION

If we want to make the whole select lazy, it can be done pretty easily:

template <typename R, typename... Args, typename... Sources, typename... Options>
std::function<vector<R>()> select_lazy(
   const std::function<R(Args...)>& f,         ///< output expression
   const std::tuple<Sources...>& sources,      ///< list of sources
   const std::function<bool(Args...)>& filter, ///< composition of filters
   const Options&... options                   ///< other options and flags
)
{
    return to_fn([=](){ return select(f, sources, filter, options...); });
}
    
#define LAZYSELECT(X) select_lazy(to_fn(X),

auto get = LAZYSELECT([](int& x){ return x; }) FROM(ints) WHERE([](int& x){ return x % 2 == 0; }) LIMIT(20);
LOG << "Evaluate it later... " << NL;
auto result = get(); // evaluation

Notice that the function captures by value inside. This is an optional choice. However, if your 'where' filters capture some additional parameters by reference, this could lead to undefined behaviour when the query is evaluated out of scope. But this is standard practice with lambdas.

There is a nice post by Bartosz about laziness, list comprehension and C++'s way to implement it from a different angle. The way described here is more straightforward, and thus simpler and clearer (which does not mean it's better). We don't wrap data and sources into anything, but Bartosz's implementation of laziness has one nice feature – it gives you the ability to request results of the query part by part, as true lazy evaluation should. Let's add this feature to the current scheme.

Let's add a new option which will contain the current request state. To keep the example compact, it will just contain an array of integer indexes (you can change it to iterators if you want).

class SelectContinuation : public SelectOption {
public:
    vector<int> indexes;
};

When this option is provided, iterations over tuple of sources should be slightly altered:

template<std::size_t I = 0, typename FuncT, typename... Tp, typename... Args>
inline typename std::enable_if<I == sizeof...(Tp), bool>::type
for_each_in_sources_indexed(const std::tuple<Tp...> &, FuncT& f, vector<int>& indexes, Args&&... args)
{
     return f(args...);
}
    
template<std::size_t I = 0, typename FuncT, typename... Tp, typename... Args>
inline typename std::enable_if<I < sizeof...(Tp), bool>::type
for_each_in_sources_indexed(const std::tuple<Tp...>& t, FuncT& f, vector<int>& indexes, Args&&... args)
{
     bool isFinished = false; // stays false if a source is empty
     int size = std::get<I>(t).size();
     int i = indexes[I];
     if (I != 0) if (i >= size) i = 0;
     if (i >= size) return false;
     for (; i < size; i++)
     {
          isFinished = for_each_in_sources_indexed<I + 1, FuncT, Tp...>(t, f, indexes, args..., std::get<I>(t).at(i));
          if (isFinished) break;
     }
     indexes[I] = ++i;
     return isFinished;
}

We just changed the loops to an indexed version. The function stores the last iteration positions inside the indexes[] array and resumes from them when iteration is invoked again. One option would be to encapsulate this state inside the select functor, but here we store it inside an outer option object. This is just one of the possible ways.
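Here is a minimal single-source sketch of the same idea – the iteration position lives in an external state object, so a later call resumes where the previous one stopped. The names (Continuation, take_filtered, isEven) are mine, for illustration only:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// External state object holding the resume position
struct Continuation {
    std::size_t index = 0;
};

inline bool isEven(int x) { return x % 2 == 0; }

// Collect up to 'limit' elements matching 'filter', starting at state.index
inline std::vector<int> take_filtered(const std::vector<int>& src,
                                      bool (*filter)(int),
                                      std::size_t limit,
                                      Continuation& state)
{
    std::vector<int> result;
    std::size_t i = state.index;
    for (; i < src.size(); ++i) {
        if (filter(src[i])) {
            result.push_back(src[i]);
            if (result.size() == limit) { ++i; break; }
        }
    }
    state.index = i; // remember where to resume next time
    return result;
}
```

Calling it twice with the same Continuation yields two consecutive slices of the filtered stream, just as two calls to the lazy select above do.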

Let’s change Pythagorean Triplets example… (this is not final version)

// just for beauty
#define LAZY(X) ,(X)
 
SelectContinuation state;
auto lazypyth = LAZYSELECT([&](int x, int y, int z){ return make_tuple(x,y,z); })
  FROM(range, range, range)
  WHERE([&](int x, int y, int z){ return (x+y+z <= N) && (z > y) && (y > x); },
        [&](int x, int y, int z){ return (x*x + y*y == z*z); })
  LAZY(state) LIMIT(5);
                
// Request 5 items...
auto pyth = lazypyth(); 

// Output:  -> 3 4 5
//  -> 5 12 13
//  -> 6 8 10
//  -> 7 24 25
//  -> 8 15 17

// Request 5 more items...
pyth = lazypyth();

// Result:  -> 9 40 41
//  -> 10 24 26
//  -> 11 60 61
//  -> 12 16 20
//  -> 12 35 37

Looks nice? But there is one major problem. Lazy evaluation should be able to work with very large ranges of input data (even infinite ranges are fine in Haskell). If we provide Naturals(1,100000000) as input for our 3 variables in the current example, the loops will run forever… How can we solve this? There are many ways:

  • we could recreate input sources before each iteration depending on current indexes of upper cycles
  • we could wrap sources into classes which will provide method narrowing iteration range before each cycle
  • we could provide additional restricting functions as options which will break inner cycles conditionally

The most flexible way is providing functions which produce sources instead of passing the sources themselves (as expected in the functional world…). There is no problem changing the implementation to handle such input, but I want to add one more feature – support for both object sources and source-producing functions simultaneously, and even the ability to mix them in a single request. Again, implementing this requires only a relatively tiny layer of code (for C++):

Inside the main loops we add a call to a getSource() function which provides the source on each iteration. So now it looks like:

template<std::size_t I = 0, typename FuncT, typename... Tp, typename... Args>
inline typename std::enable_if< I < sizeof...(Tp), bool>::type
for_each_in_sources_indexed(const std::tuple<Tp...>& t, FuncT& f, vector<int>& indexes, Args&&... args)
{
    bool isFinished = false; // stays false if a source is empty
    auto&& src = getSource(std::get<I>(t), args...);
    int size = src.size();
    int i = indexes[I];
    if (I != 0) if (i >= size) i = 0;
    if (i >= size) return false;
    for (; i < size; i++)
    {
         isFinished = for_each_in_sources_indexed<I + 1, FuncT, Tp...>(t, f, indexes, args..., src.at(i));
         if (isFinished) break;
    }
    indexes[I] = ++i;
    return isFinished;
}

And getSource() has a conditional implementation depending on which source type it gets. The main problem here is identifying whether the provided argument is a callable (e.g. a lambda).

To do this we can use std::enable_if together with std::declval and std::is_object:

template <typename Src, typename... Args>
inline typename std::enable_if< std::is_object<decltype(std::declval<Src>().begin()) >::value, Src&&>::type
getSource(Src && src, Args&&... args) {
    return std::forward<Src&&>(src);
}
    
    
template <typename F, typename... Args, typename FRes = decltype(std::declval<F>()(std::declval<Args>()...))>
inline typename std::enable_if<std::is_object< decltype(std::declval<F>()(std::declval<Args>()...)) >::value, FRes  >::type
getSource(F && f, Args&&... args)
{
    return std::forward<FRes>(f(std::forward<Args>(args)...));
}

The first variant checks that the object has a begin() method. If it does – it's a raw source object. The second variant just attempts to get the result type of a possible function call. Luckily, we can simply pass the list of arguments as a variadic template.
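The dispatch itself can be exercised in isolation. Below is a stripped-down, self-contained copy of the two overloads (the demo factory and values in the usage are mine):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>
#include <vector>

// Variant 1: the argument has begin() - use it directly as a source
template <typename Src, typename... Args>
inline typename std::enable_if<
    std::is_object<decltype(std::declval<Src>().begin())>::value, Src&&>::type
getSource(Src&& src, Args&&...) {
    return std::forward<Src&&>(src);
}

// Variant 2: the argument is callable - invoke it with the current
// iteration values to produce a fresh source
template <typename F, typename... Args,
          typename FRes = decltype(std::declval<F>()(std::declval<Args>()...))>
inline FRes getSource(F&& f, Args&&... args) {
    return f(std::forward<Args>(args)...);
}
```

A container selects variant 1 because it has begin() (variant 2 is SFINAE'd out, since a vector is not callable); a lambda selects variant 2 because it has no begin() but is invocable with the given arguments.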

Let’s test it:

auto lazypyth = LAZYSELECT([&](int x, int y, int z){ return make_tuple(z,y,x); })
    FROM(Naturals(1,100000000), 
         [](int x){ return Naturals(1,x); }, 
         [](int x,int y){ return Naturals(1,y); })
    WHERE([&](int x, int y, int z){  return x*x == y*y + z*z; })
    LAZY(state) LIMIT(5);

lazypyth();

// Result:
// -> 3 4 5
// -> 6 8 10
// -> 5 12 13
// -> 9 12 15
// -> 8 15 17

lazypyth();

// Result:
// -> 12 16 20
// -> 15 20 25
// -> 7 24 25
// -> 10 24 26
// -> 20 21 29

We had to change the order of parameters, and now it works fast enough (enough for non-optimised example code).

Note: in the same way you can extend select to support different types of sources – for example, sources wrapped inside shared_ptr<>, etc.

Also note that after all these modifications this is still just a simple function. All macros are optional, and you can use select() as a normal function.

HOW ABOUT CONCURRENCY?

Under the acceptable assumption that our sources are safe to iterate over concurrently, there is no problem extending our sample to support concurrent execution! We can slightly modify the continuation option, which we used for laziness, by adding special constructors. This allows us to set the iteration start indexes manually as job parameters.

class SelectContinuation : public SelectOption {
public:
    vector<int> indexes;
    SelectContinuation(){}
    template <typename... Args>
    SelectContinuation(Args... args) : indexes{ args... } { }
};

To be able to call the select function in a concurrent manner, let's add an alias:

template <typename R, typename... Args, typename... Sources, typename... Options>
std::function<vector<R>(const SelectContinuation& job)> select_concurrent(
    const std::function<R(Args...)>& f,          ///< output expression
    const std::tuple<Sources...>& sources,       ///< list of sources
    const std::function<bool(Args...)>& filter,  ///< composition of filters
    const Options&... options                    ///< other options and flags
)
{
    return to_fn([=](const SelectContinuation& job){ return select(f, sources, filter, job, options...); });
}

// optional
#define CONCURRENTSELECT(X) select_concurrent(to_fn(X),

That's all! Next is an example of how to run the select concurrently:

// concurrent test
SelectContinuation job1(0);
SelectContinuation job2(100);
   
auto get = CONCURRENTSELECT([](int& x){ return x; }) FROM(Naturals(1,200)) WHERE([](int& x){ return x % 2 == 0; }) LIMIT(10);
 
// two async threads       
auto part1 = std::async(std::launch::async, [&](){ return get(job1); });
auto part2 = std::async(std::launch::async, [&](){ return get(job2); });
part1.wait();
part2.wait();
        
print(part1.get());
print(part2.get());

// Result: 
// 10 numbers: 2 4 6 8 10 12 14 16 18 20 
// 10 numbers: 102 104 106 108 110 112 114 116 118 120

This approach can be extended to support additional border conditions and automatic splitting into parts.

You could also implement implicit concurrency inside the main iterations of the list comprehension by splitting the outer loop into several equal parts.
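A minimal sketch of that splitting idea – not the select itself, just an outer range divided between two std::async tasks whose partial results are merged (function names are mine):

```cpp
#include <cassert>
#include <future>
#include <vector>

// Sequential worker: collect even numbers in [first..last]
inline std::vector<int> evens_in(int first, int last) {
    std::vector<int> out;
    for (int x = first; x <= last; ++x)
        if (x % 2 == 0) out.push_back(x);
    return out;
}

// Split the range into two halves, run each half in its own async
// task, then concatenate the partial results in order
inline std::vector<int> evens_concurrent(int first, int last) {
    int mid = (first + last) / 2;
    auto part1 = std::async(std::launch::async, evens_in, first, mid);
    auto part2 = std::async(std::launch::async, evens_in, mid + 1, last);
    std::vector<int> result = part1.get();
    std::vector<int> tail = part2.get();
    result.insert(result.end(), tail.begin(), tail.end());
    return result;
}
```

This works because each task only reads its own disjoint sub-range; merging in a fixed order keeps the output deterministic.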

CONCLUSION

It's possible to implement list comprehension in C++11 without LINQ-like libraries. It will not be Haskell-short, but you can still gain some benefits from it, such as more readable and maintainable code in some cases. Combined with custom self-defined options, you can make such declarations even more useful.

Lazy and concurrent evaluation can be added without much effort.

But be aware of the inner implementation – there are cases where naive list comprehension leads to high CPU consumption. Also, in some cases list comprehension has a longer declaration than other kinds of functional processing possible in modern C++ (like the one discussed here).

If you don't like SQL syntax or macros, you can still use list comprehension as a function without any macro. If you invent shorter names for the options and add a small function from() which just makes a tuple from its arguments, you can use it like:

// without macro
select(to_fn([](int x,int y){ return x + y; }), from(ints, ints), where([](int x, int y){ return x+y<10; }), limit(10));

and it’s also possible to get rid of to_fn() there.

Working example can be found on ideone / gist

APPENDIX: LOGIC OPERATIONS OVER FUNCTION LIST (OPTIONAL)

We could pass the filters just as a vector of functions, slightly changing the 'checking' part of the loop, so the next part is totally optional. But the following code might be useful elsewhere.

This part contains two functions: fn_compose(…) and fn_logic_and(…).

The first one performs normal mathematical composition: given input functions f1(), f2(), f3() it returns f(..) = f3(f2(f1(..))).

fn_logic_and will return f(..) = f1(..) && f2(..) && f3(..) as a single std::function. Of course, all input functions should have the same list of arguments and a bool return type. This can be considered a logical AND operation over the list of functions.

template <typename F1, typename F2>
struct function_composition_traits : public function_composition_traits<decltype(&F1::operator()), decltype(&F2::operator())>
{};
    
template <typename ClassType1, typename ReturnType1, typename... Args1, typename ClassType2, typename ReturnType2, typename... Args2>
struct function_composition_traits<ReturnType1(ClassType1::*)(Args1...) const, ReturnType2(ClassType2::*)(Args2...) const>
{
        typedef std::function<ReturnType2(Args1...)> composition;
        typedef std::function<bool(Args1...)> boolOperation;

        
        template <typename Func1, typename Func2>
        inline static composition compose(const Func1& f1, const Func2& f2) {
            return [f1,f2](Args1... args) -> ReturnType2 { return f2(f1(std::forward<Args1>(args)...)); };
        }
        
        template <typename Func1, typename Func2>
        inline static boolOperation logic_and(const Func1& f1, const Func2& f2) {
            return [f1,f2](Args1... args) -> bool { return f1(std::forward<Args1>(args)...) && f2(std::forward<Args1>(args)...); };
        }
        
};
    
// fn_compose

template <typename F1, typename F2>
typename function_composition_traits<F1,F2>::composition fn_compose(const F1& lambda1,const F2& lambda2)
{
        return function_composition_traits<F1,F2>::template compose<F1,F2>(lambda1, lambda2);
}
    
template <typename F, typename... Fs>
auto fn_compose(F f, Fs... fs) -> decltype(fn_compose(f, fn_compose(fs...)))
{
    return fn_compose(f, fn_compose(std::forward<Fs>(fs)...));
}

// fn_logic_and    

template <typename F1, typename F2>
typename function_composition_traits<F1,F2>::boolOperation fn_logic_and(const F1& lambda1,const F2& lambda2)
{
    return function_composition_traits<F1,F2>::template logic_and<F1,F2>(lambda1, lambda2);
}
    
template <typename F, typename... Fs>
auto fn_logic_and(F f, Fs... fs) -> decltype(fn_logic_and(f, fn_logic_and(fs...)))
{
    return fn_logic_and(f, fn_logic_and(std::forward<Fs>(fs)...));
}
    
template <typename F>
auto fn_logic_and(F f) -> decltype(to_fn(f))
{
    return to_fn(f);
}

This can be extended to support a wide range of other operations over a list of functors. As soon as I expand it to some considerable size, I will publish this part, coupled with other handy conversion functions, as a tiny lib on github.

Any thoughts and corrections are welcome in the comments. 

C++11 functional decomposition – easy way to do AOP https://vitiy.info/c11-functional-decomposition-easy-way-to-do-aop/ https://vitiy.info/c11-functional-decomposition-easy-way-to-do-aop/#comments Tue, 03 Feb 2015 11:41:31 +0000 http://vitiy.info/?p=461

This post is about functional decomposition from the perspective of Aspect Oriented Programming, using C++11. If you are not familiar with the ideas of AOP, don't be afraid – it's a rather simple concept, and by the end of this post you will understand its benefits.

You can also treat this post simply as an example of how to use higher-order functions in C++11.

In short – AOP tries to decompose every business function into orthogonal parts called aspects, such as security, logging, error handling, etc. – the separation of crosscutting concerns. It looks like:

Old picture about AOP

Since C++11 supports higher-order functions, we can now implement this factorization without any additional tools or frameworks (like PostSharp for C#).

You can scroll down to the 'what for' chapter to check out the result and get more motivated.

PART 1 – TRIVIAL SAMPLE

Let's start with something simple – one aspect and one function.

Here is a simple lambda with a trivial computation inside:

auto plus = [](int a, int b) { LOG << a + b << NL; };

I want to add some logging before and after the computation. Instead of just adding this boilerplate code into the function body, let's go another way. In C++11 we can simply write a higher-order function which takes a function as its argument and returns a new function as the result:

template <typename ...Args>
std::function<void(Args...)> wrapLog(std::function<void(Args...)> f)
{
    return [f](Args... args){
        LOG << "start" << NL;
        f(args...);
        LOG << "finish" << NL;
    };
}

Here we used std::function, variadic templates and a lambda as the result. (LOG and NL come from my own logging stream; you can just replace them with std::cout, std::endl or your favourite logging lib.)

As I hoped to achieve the most simple and compact solution, I expected to use it like this:

auto loggedPlus = wrapLog(plus);

Unfortunately this will not compile: 'no matching function for call to ….' The reason is that a lambda is not a std::function, and the automatic type conversion can't be done. Of course, we can write something like this:

auto loggedPlus = wrapLog(static_cast<std::function<void(int,int)>>(plus));

This line will compile, but it's ugly… I hope the C++ committee will fix this casting issue. Meanwhile, the best solution I have found so far is the following:

template <typename Function>
struct function_traits
: public function_traits<decltype(&Function::operator())>
{};
    
template <typename ClassType, typename ReturnType, typename... Args>
struct function_traits<ReturnType(ClassType::*)(Args...) const>
{
    typedef ReturnType (*pointer)(Args...);
    typedef std::function<ReturnType(Args...)> function;
};
    
template <typename Function>
typename function_traits<Function>::function
to_function (Function& lambda)
{
    return typename function_traits<Function>::function(lambda);
}

This code uses type traits to convert an anonymous lambda into a std::function of the same type. We can use it like this:

auto loggedPlus = wrapLog(to_function(plus));

Not perfect, but much better. Finally we can call the functional composition and get the result:

loggedPlus(2,3);

// Result:
// start
// 5
// finish

Note: if we had declared the aspect function without a variadic template we could compose functions without the to_function() conversion, but this would kill the benefit of writing universal aspects, discussed below.
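To see that the conversion plugs into generic wrappers of this shape, here is the trait reproduced together with a tiny call-counting aspect of my own (same pattern as wrapLog, but returning a value so the result can be checked):

```cpp
#include <cassert>
#include <functional>

// function_traits / to_function as shown above, reproduced so the
// snippet is self-contained
template <typename Function>
struct function_traits
    : public function_traits<decltype(&Function::operator())> {};

template <typename ClassType, typename ReturnType, typename... Args>
struct function_traits<ReturnType(ClassType::*)(Args...) const>
{
    typedef ReturnType (*pointer)(Args...);
    typedef std::function<ReturnType(Args...)> function;
};

template <typename Function>
typename function_traits<Function>::function
to_function(Function& lambda)
{
    return typename function_traits<Function>::function(lambda);
}

// A tiny demo aspect: count how many times the wrapped function runs
template <typename R, typename... Args>
std::function<R(Args...)> counted(int& counter, std::function<R(Args...)> f)
{
    return [f, &counter](Args... args) {
        ++counter;
        return f(args...);
    };
}
```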

PART 2 – REALISTIC EXAMPLE

The introduction is over, so let's start some more real-life coding. Let's assume we want to find a user inside a database by id. While doing that we also want to log the duration of the process, check that the requesting party is authorised to perform such a request (security check), check for database request failures, and, finally, check a local cache for instant results.

And one more thing – I don't want to rewrite these additional aspects for every function type. So let's write them using variadic templates to get methods as universal as possible.

Ok, let's start. I will create dummy implementations for the auxiliary classes like User, etc. These classes are only for the example; actual production classes might be completely different (a user id should not be an int, etc.).

Sample User class as immutable data:

// Simple immutable data
class UserData {
public:
    const int id;
    const string name;
    UserData(int id, string name) : id(id), name(name) {}
};
    
// Shared pointer to immutable data
using User = std::shared_ptr<UserData>;

Let’s emulate database as simple vector of users and create one method to work with it (find user by id):

vector<User> users {make<User>(1, "John"), make<User>(2, "Bob"), make<User>(3, "Max")};

auto findUser = [&users](int id) -> Maybe<User> {
    for (User user : users) {
        if (user->id == id)
            return user;
    }
    return nullptr;
};

make<> here is just a shortcut for make_shared<>, nothing special.

Maybe<> monad

You probably noticed that the return type of the request function is something called Maybe<T>. This class is inspired by the Haskell Maybe monad, with one major addition: instead of just holding a Nothing state or a value state, it can also contain an Error state.

First, here is a sample type for the error description:

/// Error type - int code + description
class Error {
public:
    Error(int code, string message) : code(code), message(message) {}
    Error(const Error& e) : code(e.code), message(e.message) {}

    const int code;
    const string message;
};

Here is a minimalistic implementation of Maybe:

template < typename T >
class Maybe {
private:
    const T data;
    const shared_ptr<Error> error;
public:
    Maybe(T data) : data(std::forward<T>(data)), error(nullptr) {}
    Maybe() : data(nullptr), error(nullptr) {}
    Maybe(decltype(nullptr) nothing) : data(nullptr), error(nullptr) {}
    Maybe(Error&& error) : data(nullptr), error(make_shared<Error>(error)) {}
        
    bool isEmpty() { return (data == nullptr); };
    bool hasError() { return (error != nullptr); };
    T operator()(){ return data; };
    shared_ptr<Error> getError(){ return error; };
};
    
template <class T>
Maybe<T> just(T t)
{
    return Maybe<T>(t);
}

Note that you don't have to use Maybe<> – it's used here only for the example.

Here we also use the fact that nullptr in C++11 has its own type. Maybe has a constructor from that type which produces the nothing state. So when you return a result from the findUser function, there is no need for an explicit conversion into Maybe<> – you can just return a User or nullptr, and the proper constructor will be called.

Operator () returns the possible value without any checks, and getError() returns the possible error.

The function just() is used for explicit Maybe<T> construction (this is the standard name).
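As a quick illustration of the three states, here is a compact self-contained rendition of the classes above, plus a toy lookup function of my own (lookup and IntPtr are not from the post):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Compact copy of the three-state Maybe for a usage demo
class Error {
public:
    Error(int code, std::string message) : code(code), message(message) {}
    const int code;
    const std::string message;
};

template <typename T>
class Maybe {
    const T data;
    const std::shared_ptr<Error> error;
public:
    Maybe(T data) : data(data), error(nullptr) {}
    Maybe(decltype(nullptr)) : data(nullptr), error(nullptr) {}
    Maybe(Error&& e) : data(nullptr), error(std::make_shared<Error>(e)) {}
    bool isEmpty() const { return data == nullptr; }
    bool hasError() const { return error != nullptr; }
    T operator()() const { return data; }
    std::shared_ptr<Error> getError() const { return error; }
};

using IntPtr = std::shared_ptr<int>;

// Three possible outcomes of a lookup: error, nothing, or a value
inline Maybe<IntPtr> lookup(int key) {
    if (key < 0)  return Error(400, "negative key");   // error state
    if (key == 0) return nullptr;                      // nothing state
    return IntPtr(new int(key * key));                 // value state
}
```

Note how each return statement picks a different Maybe constructor – no explicit conversion is needed at the call site.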

Logging aspect

First, let's rewrite the log aspect so that it measures the execution time using std::chrono. Also let's add a new string parameter – the name of the called function, which will be printed to the log.

template <typename R, typename ...Args>
std::function<R(Args...)> logged(string name, std::function<R(Args...)> f)
{
        return [f,name](Args... args){
           
            LOG << name << " start" << NL;
            auto start = std::chrono::high_resolution_clock::now();
            
            R result = f(std::forward<Args>(args)...);
            
            auto end = std::chrono::high_resolution_clock::now();
            auto total = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
            LOG << "Elapsed: " << total << "us" << NL;
            
            return result;
        };
}

Note std::forward here for passing the arguments in a cleaner way. We don't need to specify the return type as Maybe<R> because we don't perform any specific action like error checking here.

‘Try again’ aspect

What if we failed to get the data (for example, in case of a disconnect)? Let's create an aspect which, in case of error, performs the same query one more time.

// If there was error - try again
template <typename R, typename ...Args>
std::function<Maybe<R>(Args...)> triesTwice(std::function<Maybe<R>(Args...)> f)
{
        return [f](Args... args){
            Maybe<R> result = f(std::forward<Args>(args)...);
            if (result.hasError())
                return f(std::forward<Args>(args)...);
            return result;
        };
}

Maybe<> is used here to identify the error state. This method can be extended – we could check the error code and decide whether there is any sense in performing a second request (was it a network problem, or did the database report a format error?).

Cache aspect

Next, let's add a client-side cache and check it before performing the actual server-side request (in the functional world this is called memoization). To emulate the cache we can just use std::map:

map<int,User> userCache;

// Use local cache (memoize)
template <typename R, typename C, typename K, typename ...Args>
std::function<Maybe<R>(K,Args...)> cached(C & cache, std::function<Maybe<R>(K,Args...)> f)
{
        return [f,&cache](K key, Args... args){
            // get key as first argument
            
            if (cache.count(key) > 0)
                return just(cache[key]);
            else
            {
                Maybe<R> result = f(std::forward<K>(key), std::forward<Args>(args)...);
                if (!result.hasError())
                    cache.insert(std::pair<int, R>(key, result())); //add to cache
                return result;
            }
        };
}

This function inserts the element into the cache if it was not there. Here we used the knowledge that the cache is a std::map, but it can be changed to any key-value container hidden behind some interface.

Second important part: we used only the first function argument as the key. What if you have a complex request where all parameters should act as a composite key? It's still possible, and there are several ways to do it. The first way is to use std::tuple as the key (see below). The second is to create a cache class which allows several key parameters. The third is to combine the arguments into a single string key using variadic templates. Using the tuple approach we can rewrite it like this:

map<tuple<int>,User> userCache;

// Use local cache (memoize)
template <typename R, typename C, typename ...Args>
std::function<Maybe<R>(Args...)> cached(C & cache, std::function<Maybe<R>(Args...)> f)
{
        return [f,&cache](Args... args){
            
            // get key as tuple of arguments
            auto key = make_tuple(args...);
            
            if (cache.count(key) > 0)
                return just(cache[key]);
            else
            {
                Maybe<R> result = f(std::forward<Args>(args)...);
                if (!result.hasError())
                    cache.insert(std::pair<decltype(key), R>(key, result())); //add to cache
                return result;
            }
        };
}

Now it’s much more universal.

Security aspect

Never forget about security. Let's emulate a user session with a dummy class –

class Session {
public:
    bool isValid() { return true; }
} session;

The security-checking higher-order function has an additional parameter – the session. The check simply verifies that isValid() returns true:

// Security checking
template <typename R, typename ...Args, typename S>
// note: the session is taken by reference, so the lambda's capture below does not dangle
std::function<Maybe<R>(Args...)> secured(S& session, std::function<Maybe<R>(Args...)> f)
{
        // if user is not valid - return nothing
        return [f, &session](Args... args) -> Maybe<R> {
            if (session.isValid())
                return f(std::forward<Args>(args)...);
            else
                return Error(403, "Forbidden");
        };
}

‘Not empty’ aspect

The last thing in this example – let's treat a not-found user as an error.

// Treat empty state as error
template <typename R, typename ...Args>
std::function<Maybe<R>(Args...)> notEmpty(std::function<Maybe<R>(Args...)> f)
{
        return [f](Args... args) -> Maybe<R> {
            Maybe<R> result = f(std::forward<Args>(args)...);
            if ((!result.hasError()) && (result.isEmpty()))
                return Error(404, "Not Found");
            return result;
        };
}

I'm not writing about the error handling aspect here, but it can also be implemented via the same approach. Note that by using error propagation inside the Maybe<> monad you can avoid exceptions and define your error processing logic in a different way.
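As a sketch of what such an error-handling aspect could look like: MaybeValue below is a minimal stand-in for the post's Maybe<> (just a value plus an error code), and onError is a hypothetical name. The aspect runs a handler lambda on failure and propagates the result unchanged – no exceptions anywhere:

```cpp
#include <functional>

// Minimal stand-in for Maybe<>: a value or an error code (0 = no error).
template <typename T>
struct MaybeValue {
    T value;
    int errorCode;
    bool hasError() const { return errorCode != 0; }
};

// Error-handling aspect: invoke a handler (log, metrics, ...) on failure.
template <typename R, typename ...Args, typename H>
std::function<MaybeValue<R>(Args...)> onError(H handler,
                                              std::function<MaybeValue<R>(Args...)> f)
{
    return [f, handler](Args... args) -> MaybeValue<R> {
        MaybeValue<R> result = f(args...);
        if (result.hasError())
            handler(result.errorCode);  // side effect only; the error is propagated
        return result;
    };
}
```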

Multithread lock aspect

template <typename R, typename ...Args>
std::function<R(Args...)> locked(std::mutex& m, std::function<R(Args...)> f)
{
    return [f,&m](Args... args){
        std::unique_lock<std::mutex> lock(m);
        return f(std::forward<Args>(args)...);
    };
}

No comments.
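Still, a small self-contained demo may be useful: four threads bump a shared counter through the wrapped function, and the mutex makes the unsynchronised increment safe. The locked aspect is repeated here so the snippet compiles on its own; runLockedDemo is an invented name for illustration:

```cpp
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// The locked aspect from the post (repeated so this snippet is self-contained).
template <typename R, typename ...Args>
std::function<R(Args...)> locked(std::mutex& m, std::function<R(Args...)> f)
{
    return [f, &m](Args... args) -> R {
        std::unique_lock<std::mutex> lock(m);
        return f(args...);
    };
}

// Demo: four threads perform 1000 increments each through the wrapper.
int runLockedDemo()
{
    std::mutex m;
    int counter = 0;
    std::function<void()> inc = [&counter]{ ++counter; };
    auto safeInc = locked(m, inc);

    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([&safeInc]{ for (int j = 0; j < 1000; ++j) safeInc(); });
    for (auto& t : threads)
        t.join();
    return counter; // all 4000 increments survive thanks to the lock
}
```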

FINALLY

Finally, what was all this madness for? FOR THIS LINE:

// Aspect factorization

auto findUserFinal = secured(session, notEmpty( cached(userCache, triesTwice( logged("findUser", to_function(findUser))))));

Checking (let's find the user with id 2):

auto user = findUserFinal(2);
LOG << (user.hasError() ? user.getError()->message : user()->name) << NL;

// output:
// 2015-02-02 18:11:52.025 [83151:10571630] findUser start
// 2015-02-02 18:11:52.025 [83151:10571630] Elapsed: 0us
// 2015-02-02 18:11:52.025 [83151:10571630] Bob

Ok, let’s perform tests for several users ( here we will request same user twice and one non-existing user ):

auto testUser = [&](int id) {
    auto user = findUserFinal(id);
    LOG << (user.hasError() ? "ERROR: " + user.getError()->message : "NAME:" + user()->name) << NL;
};

for_each_argument(testUser, 2, 30, 2, 1);

//2015-02-02 18:32:41.283 [83858:10583917] findUser start
//2015-02-02 18:32:41.284 [83858:10583917] Elapsed: 0us
//2015-02-02 18:32:41.284 [83858:10583917] NAME:Bob
//2015-02-02 18:32:41.284 [83858:10583917] findUser start
//2015-02-02 18:32:41.284 [83858:10583917] Elapsed: 0us
// error:
//2015-02-02 18:32:41.284 [83858:10583917] ERROR: Not Found
// from cache:
//2015-02-02 18:32:41.284 [83858:10583917] NAME:Bob
//2015-02-02 18:32:41.284 [83858:10583917] findUser start
//2015-02-02 18:32:41.284 [83858:10583917] Elapsed: 0us
//2015-02-02 18:32:41.284 [83858:10583917] NAME:John

As you can see, it works as intended. It's obvious that we get a lot of benefits from such decomposition. Factorisation leads to decoupling of functionality, a more modular structure and so on. As a result, you gain more focus on the actual business logic.

We can change the order of aspects as we like. And as we made the aspect functions rather universal, we can reuse them, avoiding a lot of code duplication.

Instead of plain functions we can use more sophisticated functors (with inheritance), and instead of Maybe<> there could be a more complex structure holding some additional info. So the whole scheme is extensible.

Note also that you can pass lambdas as additional aspect parameters.
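For example, the triesTwice aspect used earlier could be generalised into one that takes the retry condition as a lambda parameter. A sketch (retryIf is an invented name, and the plain return type stands in for Maybe<R>):

```cpp
#include <functional>

// Aspect parameterized by a lambda: the predicate decides whether a result
// is acceptable; if not, the call is retried exactly once.
template <typename R, typename ...Args, typename Pred>
std::function<R(Args...)> retryIf(Pred shouldRetry, std::function<R(Args...)> f)
{
    return [f, shouldRetry](Args... args) -> R {
        R result = f(args...);
        if (shouldRetry(result))
            result = f(args...);   // one more attempt
        return result;
    };
}
```

With a predicate like `[](int r){ return r == 0; }` this behaves like triesTwice for zero results, but the caller chooses the failure condition.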

A working sample to play with: github gist or ideone

Ps. BONUS:

template <class F, class... Args>
void for_each_argument(F f, Args&&... args) {
    (void)(int[]){(f(std::forward<Args>(args)), 0)...};
}

 

]]>
https://vitiy.info/c11-functional-decomposition-easy-way-to-do-aop/feed/ 13
Small presentation of my cross-platform engine for mobile and desktop applications https://vitiy.info/small-presentation-of-my-cross-platform-engine-for-mobile-and-desktop-applications/ https://vitiy.info/small-presentation-of-my-cross-platform-engine-for-mobile-and-desktop-applications/#comments Wed, 21 Jan 2015 14:50:39 +0000 http://vitiy.info/?p=444 I made a small presentation about my cross-platform engine for mobile and desktop applications, codename Kobald. Click the image to play it in a new window (use arrows and space to move through):

Screenshot 2015-01-21 21.53.14

This is a not-so-technical presentation; the main info about the engine will come later in a separate post.

]]>
https://vitiy.info/small-presentation-of-my-cross-platform-engine-for-mobile-and-desktop-applications/feed/ 10
Easy way to auto upload modifications to server under osX (live development) https://vitiy.info/easy-way-to-auto-upload-modifications-live-development/ https://vitiy.info/easy-way-to-auto-upload-modifications-live-development/#respond Thu, 02 Oct 2014 19:57:54 +0000 http://vitiy.info/?p=340 Here is a very simple way to set up immediate automatic upload of source code modifications to a server via SSH. This gives you the ability to do live development and testing of your solutions. There are a lot of utilities to achieve this, and you could even write a not-so-complicated script yourself that recursively compares file modification times – but there is a very easy solution from Facebook called watchman. To install it under OS X use Homebrew:

brew install watchman

Watchman will call an upload script for each modified file – let's write this script and call it uploadauto.sh. It will be really short!

MYSERVER=11.22.33.44
scp $1 $MYSERVER:~/folder/on/server/$1

I assume you have SSH keys set up on the server and don't enter passwords manually. Put this script into the folder you want to synchronise. Finally, we need to tell watchman to watch this folder and call our script when something changes:

watchman watch /Users/me/project1
watchman -- trigger /Users/me/project1 upload '*.*' -- ./uploadauto.sh

Replace here /Users/me/project1 with your folder name. upload is the name of the trigger. ‘*.*’ is the mask for files to be monitored. More information about trigger syntax can be found here.

And that's all!

To stop the trigger you can use: watchman triggerdel /path/to/dir triggername. To stop watching over folder: watchman watchdel /path/to/dir

You can also add to the script some actions that notify the server to rebuild the service or perform some post-processing on the server side. So you can customise the process as you want.

This solution should also work under Linux systems with inotify.

PS. Don't use this for deployment – only for testing and development. For production I recommend using version control repositories from your own distribution server.

PPS. This tool was initially developed to reduce compilation time, so you can set it up to compile each file upon modification. I guess you can imagine other applications.

 

]]>
https://vitiy.info/easy-way-to-auto-upload-modifications-live-development/feed/ 0
Is 4K TV effective for software development? https://vitiy.info/is-4k-tv-effective-for-software-development/ https://vitiy.info/is-4k-tv-effective-for-software-development/#comments Tue, 30 Sep 2014 10:24:21 +0000 http://vitiy.info/?p=330 Until recently I thought that a developer who had tried a setup of two 24-inch full HD monitors would stay with it as a must-have option for more productive work. (If you are still below this setup, you must be located at the beach, drinking cocktails, and just can't bring monitors with you.) But now it is time to upgrade your workplace further – I now use a 4K 42-inch TV for development.

4K tv for software dev

Why now? In 2014 manufacturers released new cheap UHD devices in the 39–42 inch range. And by cheap I mean less than 1000 USD (I bought an LG 42UB820V model, as it had an IPS LCD and HDMI 2.0). Only 4K UHD resolution gives 72+ dpi at this size, so such a panel can be used as a "normal" computer monitor when you sit close to it.

What for? I suppose that as a developer you know the advantages of having a large workspace in terms of resolution. When you switch from 1280×800 to 1920×1200 you feel joy, as you waste less time scrolling, switching windows, finding things and so on. When you go to a 2-monitor setup with full HD resolution, you can open several IDEs simultaneously, do live development, run a virtual machine on the second monitor and so on. And finally, when you go to 3840×2160 (4K) you get twice as much space again – and this is amazing space to work with when it is 42 inches big. So effectively you have four full HD monitors! WOW was my first word when I plugged it in. This is perfect for development.

4k resolution

You can now open several virtual machines, several IDEs, and still have space left for terminals, Finder windows and so on. Overall this boosts your productivity, as you can now arrange the whole chain of applications you need to implement a task. For example, if you do some web development, you can open a database view, a JS code editor, a CSS editor and a web browser to see your changes right away (and maybe some FTP clients, SSH shells, etc). You can see this whole chain at the same time, which boosts your productivity. You waste much less time switching and finding your windows, and looking at the whole picture helps you concentrate on your real task.

I personally enjoy having several Xcode windows, Eclipse, Sublime Text, Brackets, terminals and a lot of other stuff open at the same time for a more fluent development process. For me there are no problems like "you have to turn your head" – I don't have such issues. Personally I find it very comfortable to work with, and if you shift from a 2-monitor setup it will be a smooth transition.

There is also a nice possibility to resize windows into large vertical columns. You can view web pages and code this way and get much more information in a single view. Once again, less scrolling.

Of course you will not get a 50% productivity boost (if you're not a mad copy-paster). As I remember, some older sources claimed more than a 10% boost from a second monitor ("Adding a second screen can achieve productivity increases of 9 to 50 percent" – Microsoft research). So even if it is only 9% for two monitors, it is already worth it! And if you go further to 4K, you can expect another 10%. If you work with complex systems and environments, this number can be even bigger. So it is an actual 15–20% productivity boost in comparison to a single notebook screen. And for me it really feels like that. Nice!

Any disadvantages?

Frequency limit. 4K resolution requires a lot of bandwidth from your video system. You need at least HDMI 1.4 to produce output at 30Hz, and HDMI 2.0 to produce 60Hz. When you buy a TV, make sure it has an HDMI 2.0 input. Another option is DisplayPort – there is only one problem: most TVs don't have it at all. As I write this, there is only one Panasonic TV with it (and it's 65 inches), which is not suitable for development. You also need a recent Mac/PC with a good video card to handle it.

So right now you should expect it to work at 30Hz. I was afraid that the lag would be painful… but after 10 minutes you don't feel the cursor slowness at all! Scrolling and animations are also ok – not as smooth as at 60 fps, of course, but not as slow as I expected. It's not suitable for gaming, but it's enough for work. Watching 4K video is also possible through HDMI 1.4, as 30Hz is enough (keep in mind that you need a good CPU for 4K codecs). So overall, 30Hz is enough for software development.

The second problem is "ugly fonts on a TV". When I first plugged in the TV I was shocked, as text became almost unreadable. But there are some cures. First – disable the TV's post-processing of the signal. Set game mode. Set saturation and sharpening to zero. Lower the contrast slider. Disable all other tricky TV filters, as they will only corrupt the text. These actions make the font output much more readable.

Under Mac there are also some scripts to make it even more nice-looking, as the Mac applies some special font processing. One script changes the font smoothing algorithm (defaults write -g AppleFontSmoothing -int 2). A second script changes the color transfer mode from YCbCr to RGB (http://www.ireckon.net/2013/03/force-rgb-mode-in-mac-os-x-to-fix-the-picture-quality-of-an-external-monitor). That one did not work for me – but maybe I was not stubborn enough to run it properly.

So after some tuning the font shapes became ok, but not perfect. If you are a retina fan you will suffer some discomfort, but since you work in such a vast workspace this becomes a minor, bearable disadvantage imho.

Finally (in my case):

– Pros – super vast working area, price is low enough now, one solid working space, ability to see a lot of apps at the same time

– Cons – 30Hz only for now, fonts are not perfect under OS X

Verdict:

I totally love this setup for working! So if you spend a lot of time developing apps, as I do, I totally recommend it as a nice instrument to make your life more productive and fun.

If you are buying hardware for your office devs, consider discussing TVs instead of 2-monitor setups. I bet they will be very happy with it.

PS. The retina age will still come to PC soon with retina displays (which I already have at my macpro). I waited for Apple to announce their external retina display, which is rumoured to have 5K resolution. I think it will be only an iMac (because of the bandwidth problems I described earlier). Anyway, there are now 28-inch monitors with 4K resolution (from Dell, Samsung, etc), and you can build a setup of two 4K monitors and have retina beauty for your fonts and images – but it will not give you actual additional workspace. That is not bad as an upgrade, but not as amazing as a 42-inch field. So I think my next setup will be an 8K retina TV-like monitor.

ADDITION 1:

Here are some macro photos of the LCD matrix. View at 100%, because browser resizing will produce artefacts. The distance between pixels is not visible in real life. Also, there are no noticeable differences in color depth.

4kscreen 029

Brackets

4k screen macro photo 1

Icon

Xcode

Dock

Dock

]]>
https://vitiy.info/is-4k-tv-effective-for-software-development/feed/ 20
Small guide: how to support immersive mode under Android 4.4+ https://vitiy.info/small-guide-how-to-support-immersive-mode-under-android-4-4/ https://vitiy.info/small-guide-how-to-support-immersive-mode-under-android-4-4/#comments Thu, 19 Jun 2014 08:38:14 +0000 http://vitiy.info/?p=284 Android 4.4 introduced the new immersive mode, which allows making the system bars translucent and extending the application area to fill the whole screen. This looks great, as it gives the application more space in terms of usability, and it creates a more stylish look if the application design is done accordingly.

Immersive mode for TripBudget

I added support for immersive mode to TripBudget. There were some unexpected troubles along the way, so I decided to write a small guide here.

Note: I am talking here about the immersive mode where the bottom navigation bar is hidden – using the SYSTEM_UI_FLAG_IMMERSIVE_STICKY flag. This guide aims to gather and solve all the problems that prevent sticky mode from working properly.

First we need to check whether the device supports immersive mode:

public boolean areTranslucentBarsAvailable()
{
  try {
	int id = context.getResources().getIdentifier("config_enableTranslucentDecor", "bool", "android");
	if (id == 0) {
	        // not on KitKat
		return false;
	} else {
	    boolean enabled = context.getResources().getBoolean(id);
	    // enabled = are translucent bars supported on this device
	    return enabled;
	}
  } catch (Exception e) { return false; }
}

There are several approaches to enabling the mode itself – I like the programmatic way, without any XML scheme modifications. We just need to set some window properties to enable translucent system bars (the status bar and the bottom navigation bar).

Window w = activity.getWindow();
w.setFlags(WindowManager.LayoutParams.FLAG_TRANSLUCENT_NAVIGATION, WindowManager.LayoutParams.FLAG_TRANSLUCENT_NAVIGATION);
w.setFlags(WindowManager.LayoutParams.FLAG_TRANSLUCENT_STATUS, WindowManager.LayoutParams.FLAG_TRANSLUCENT_STATUS);
w.setFlags(WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN,WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN);

And then we set the View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY flag. This will hide the ugly navigation bar on devices with no hardware buttons (like the Nexus 4).

Window w = activity.getWindow();
w.getDecorView().setSystemUiVisibility(View.SYSTEM_UI_FLAG_LAYOUT_STABLE
    | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
    | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
    | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);

In a perfect world that would be enough, but the main problem arises when we need to keep immersive mode across the application life cycle. The solution suggested by Google is pretty simple – you just handle the onSystemUiVisibilityChange / onWindowFocusChanged events. Like this:

@Override
public void onWindowFocusChanged(boolean hasFocus) {
        super.onWindowFocusChanged(hasFocus);
    if (hasFocus) {
        decorView.setSystemUiVisibility(
                View.SYSTEM_UI_FLAG_LAYOUT_STABLE
                | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_FULLSCREEN
                | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);}
}

But there are still some problems/bugs to solve:

  • Some versions of 4.4 break immersive mode when the user presses the volume buttons
  • Sometimes onSystemUiVisibilityChange is not called at all!
  • When you switch between apps, immersive mode can break when used together with sticky mode (the lower navigation bar remains visible)
  • When you press the back button on the nav bar, it turns black and stays that way!

When we set the mode for the first time, we also register some handlers:

final View decorView = w.getDecorView();
decorView.setOnSystemUiVisibilityChangeListener (new View.OnSystemUiVisibilityChangeListener() {
    @Override
    public void onSystemUiVisibilityChange(int visibility) {
        restoreTransparentBars();
    }
});
	            
decorView.setOnFocusChangeListener(new View.OnFocusChangeListener() {
    @Override
    public void onFocusChange(View v, boolean hasFocus) {
        restoreTransparentBars();					
    }
});

We also add refreshers in onResume and onWindowFocusChanged – and in the onKeyDown event too!

As proposed here as a solution to the volume up/down problem, we handle the onKeyDown event in a specific way. Not only do we try to restore immersive mode instantly when the event fires, we also use a delayed handler:

private Handler mRestoreImmersiveModeHandler = new Handler();
private Runnable restoreImmersiveModeRunnable = new Runnable()
{
    public void run() 
    {
    	restoreTransparentBars();	    	
    }
};
	
public void restoreTranslucentBarsDelayed()
{
	// we restore it now and after 500 ms!
	if (isApplicationInImmersiveMode) {
		restoreTransparentBars();
		mRestoreImmersiveModeHandler.postDelayed(restoreImmersiveModeRunnable, 500);
	}
}

@Override 
public boolean onKeyDown(int keyCode, KeyEvent event) 
{
    if(keyCode == KeyEvent.KEYCODE_BACK ||keyCode == KeyEvent.KEYCODE_VOLUME_DOWN || keyCode == KeyEvent.KEYCODE_VOLUME_UP)
    {
        restoreTranslucentBarsDelayed();
    }

    return super.onKeyDown(keyCode, event);
}

And the last part of the tricky restore code – my restore function does not just apply sticky mode, it clears it first. This solves the problems that occur when the user switches between apps (checked on a Nexus 4).

@TargetApi(19)
public void restoreTransparentBars()
{
	if (isApplicationInImmersiveMode)
		try {
			Window w = activity.getWindow();
			w.getDecorView().setSystemUiVisibility(
						View.SYSTEM_UI_FLAG_LAYOUT_STABLE
						| View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
						| View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
						);
				
			w.getDecorView().setSystemUiVisibility(
						View.SYSTEM_UI_FLAG_LAYOUT_STABLE
						| View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
						| View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
						| View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);
				
		} catch (Exception e) {}
}

In my implementation handlers at resume and focus events call restoreTranslucentBarsDelayed();

One last concern – your app should be aware of the transparent status bar. To get the height of the status bar I use this code:

// Get the height of status bar
return context.getResources().getDimensionPixelSize(context.getResources().getIdentifier("status_bar_height", "dimen", "android"));

Enjoy using immersive mode 🙂

Maybe not all problems are covered in this small guide – so feel free to describe them and add your solutions in the comments.

]]>
https://vitiy.info/small-guide-how-to-support-immersive-mode-under-android-4-4/feed/ 12
Performance of complex GL ES 2.0 shaders on mobile devices https://vitiy.info/perfomance-of-complex-gl-es-2-0-shaders-on-mobile-devices/ https://vitiy.info/perfomance-of-complex-gl-es-2-0-shaders-on-mobile-devices/#respond Sun, 18 Aug 2013 12:29:40 +0000 http://vitiy.info/?p=75 There is a very nice site, http://glsl.heroku.com/, where you can find a gallery of complex GLSL shaders (from very simple gradients to very complex rendering systems). You can modify their code in real time using the provided editor:

heroku

The current implementation of WebGL uses GL ES 2.0 – the same as all modern Android / iOS phones and tablets. So I decided to test whether I could use these shaders in mobile applications – and tested their performance on a Samsung Galaxy Note II. Of course I tested only relatively simple shaders, expecting them to run slowly…

The results were quite disappointing to me. Only very, very simple shaders were working fine. More complex shaders (which were the point of interest) were running at 1 fps or less. It looks like even such a modern phone as the Note II has no driver optimization for such shaders; even worse, some shaders came out corrupted – see the next picture:

Corrupted shader on Galaxy Note II

So today, unfortunately, we can't use such shaders in mobile production. But I hope the situation will change in the future.

UPD (28.12.2013): I performed some tests on a Google Nexus 5 phone and got slightly more positive results. Non-trivial (but not very complex) shaders began to work – not as smoothly as they should, but fps is above 20. I even added 2 experimental example effects to the Alive numbers 2 live wallpaper on Android (they are visible in elite mode as bonus experimental content). On the Samsung Note 2 these animations do not work at all, but on the Nexus 5 they run more or less.

shader1   shader2

 

Later I will perform some tests on the new iPad mini with retina screen. Maybe it will show even better results. I expect tablets to show better results.

 

 

]]>
https://vitiy.info/perfomance-of-complex-gl-es-2-0-shaders-on-mobile-devices/feed/ 0
New Year greeting-card screensaver 2012 https://vitiy.info/%d0%bd%d0%be%d0%b2%d0%be%d0%b3%d0%be%d0%b4%d0%bd%d0%b8%d0%b9-%d1%81%d0%ba%d1%80%d0%b8%d0%bd%d1%81%d0%b5%d0%b9%d0%b2%d0%b5%d1%80-%d0%be%d1%82%d0%ba%d1%80%d1%8b%d1%82%d0%ba%d0%b0-2012/ https://vitiy.info/%d0%bd%d0%be%d0%b2%d0%be%d0%b3%d0%be%d0%b4%d0%bd%d0%b8%d0%b9-%d1%81%d0%ba%d1%80%d0%b8%d0%bd%d1%81%d0%b5%d0%b9%d0%b2%d0%b5%d1%80-%d0%be%d1%82%d0%ba%d1%80%d1%8b%d1%82%d0%ba%d0%b0-2012/#comments Fri, 30 Dec 2011 10:41:52 +0000 http://vitiy.info/?p=52 screen400

It has been a while since I last posted to the blog – but there is a good occasion! To properly congratulate everyone on the new year, I wrote a New Year greeting-card screensaver (QT OpenGL). To run it you need a not-too-ancient video card and a reasonably big monitor. Happy new year 2012!

Download ~4Mb

]]>
https://vitiy.info/%d0%bd%d0%be%d0%b2%d0%be%d0%b3%d0%be%d0%b4%d0%bd%d0%b8%d0%b9-%d1%81%d0%ba%d1%80%d0%b8%d0%bd%d1%81%d0%b5%d0%b9%d0%b2%d0%b5%d1%80-%d0%be%d1%82%d0%ba%d1%80%d1%8b%d1%82%d0%ba%d0%b0-2012/feed/ 1
Dollar and euro exchange rates: a gadget for Vista https://vitiy.info/%d0%ba%d1%83%d1%80%d1%81%d1%8b-%d0%b4%d0%be%d0%bb%d0%bb%d0%b0%d1%80%d0%b0-%d0%b8-%d0%b5%d0%b2%d1%80%d0%be-%d0%b3%d0%b0%d0%b4%d0%b6%d0%b5%d1%82-%d0%b4%d0%bb%d1%8f-%d0%b2%d0%b8%d1%81%d1%82%d1%8b/ https://vitiy.info/%d0%ba%d1%83%d1%80%d1%81%d1%8b-%d0%b4%d0%be%d0%bb%d0%bb%d0%b0%d1%80%d0%b0-%d0%b8-%d0%b5%d0%b2%d1%80%d0%be-%d0%b3%d0%b0%d0%b4%d0%b6%d0%b5%d1%82-%d0%b4%d0%bb%d1%8f-%d0%b2%d0%b8%d1%81%d1%82%d1%8b/#comments Fri, 05 Sep 2008 10:30:57 +0000 http://vitiy.info/?p=50 I had long wanted to write some gadget for Vista. Initially I wanted to do it in WPF, but it turned out that Microsoft takes a different approach to gadgets. In essence, a gadget is an HTML web page, with everything that implies. So only XBAP or Silverlight can be embedded into a gadget (and this became possible only relatively recently).

After looking through the gadgets in the gadget catalog that display currency rates, and finding nothing interesting there, I decided to write my own rate monitor. Our central bank has an excellent web service that provides all the information about currency rates for any period.

The attempt to use Silverlight ended in failure. First, under 64-bit Vista, Silverlight does not work in the 64-bit sidebar. This can be worked around by launching the 32-bit version of the sidebar, but that is already a perversion. Second, from Silverlight inside a gadget you cannot properly call a web service. This is because Silverlight in a gadget does not see the configuration XML files and cannot get access. There is a workaround that passes data to the Silverlight control through an AJAX script, but I don't consider that very elegant.

In the end I did it more simply – the gadget just shows an image from a web server, refreshing it once an hour. On the server a PHP script runs via cron, requesting the data from the central bank. The gadget shows the current dollar and euro rates, how much they changed over the day and over the week, and a chart of rate dynamics over 3 weeks.

Central bank currency rate gadget for Vista

Download the gadget

Just run the downloaded file and the gadget will install. If that doesn't happen and it opens as a zip archive, you can choose Open with… Sidebar on the gadget file. If that doesn't help either, you can create the folder C:\Users\YourName\AppData\Local\Microsoft\Windows Sidebar\Gadgets\CurrencyRates.gadget\ and copy the archive contents into it.

]]>
https://vitiy.info/%d0%ba%d1%83%d1%80%d1%81%d1%8b-%d0%b4%d0%be%d0%bb%d0%bb%d0%b0%d1%80%d0%b0-%d0%b8-%d0%b5%d0%b2%d1%80%d0%be-%d0%b3%d0%b0%d0%b4%d0%b6%d0%b5%d1%82-%d0%b4%d0%bb%d1%8f-%d0%b2%d0%b8%d1%81%d1%82%d1%8b/feed/ 21