
How Qt Signals and Slots Work - Part 2 - Qt5 New Syntax


This is the sequel to my previous article explaining the implementation details of signals and slots. In Part 1, we saw the general principle and how it works with the old syntax. In this blog post, we will see the implementation details behind the new function-pointer-based syntax in Qt 5.

New Syntax in Qt5

The new syntax looks like this:

QObject::connect(&a, &Counter::valueChanged, &b, &Counter::setValue);

Why the new syntax?

I already explained the advantages of the new syntax in a dedicated blog entry. To summarize, the new syntax allows compile-time checking of the signals and slots. It also allows automatic conversion of the arguments if they do not have the same types. As a bonus, it enables support for lambda expressions.

New overloads

Only a few changes were required to make this possible.
The main idea is to add new overloads of QObject::connect which take pointers to functions as arguments instead of char*.

There are three new static overloads of QObject::connect: (not actual code)

  1. QObject::connect(const QObject *sender, PointerToMemberFunction signal,
                     const QObject *receiver, PointerToMemberFunction slot,
                     Qt::ConnectionType type)
  2. QObject::connect(const QObject *sender, PointerToMemberFunction signal, PointerToFunction method)
  3. QObject::connect(const QObject *sender, PointerToMemberFunction signal, Functor method)

The first one is the closest to the old syntax: you connect a signal from the sender to a slot in a receiver object. The two other overloads connect a signal to a static function or a functor object without a receiver.
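For illustration, the two receiver-less overloads can be used like this (a short sketch; printCounter is a made-up free function taking an int):

// overload 2: connect to a free function or static member function
QObject::connect(&a, &Counter::valueChanged, &printCounter);

// overload 3: connect to a functor or C++11 lambda
QObject::connect(&a, &Counter::valueChanged, [](int value) {
    qDebug() << "value changed to" << value;
});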

They are very similar and we will only analyze the first one in this article.

Pointer to Member Functions

Before continuing my explanation, I would like to open a parenthesis to talk a bit about pointers to member functions.

Here is a simple sample code that declares a pointer to member function and calls it.

void (QPoint::*myFunctionPtr)(int); // Declares myFunctionPtr as a pointer to
                                    // a member function returning void and
                                    // taking (int) as parameter
myFunctionPtr = &QPoint::setX;
QPoint p;
QPoint *pp = &p;
(p.*myFunctionPtr)(5);   // calls p.setX(5);
(pp->*myFunctionPtr)(5); // calls pp->setX(5);

Pointers to members and pointers to member functions belong to a subset of C++ that is not used much and is therefore less well known.
The good news is that you still do not really need to know much about them to use Qt and its new syntax. All you need to remember is to put the & before the name of the signal in your connect call. You will not need to cope with the cryptic ::*, .* or ->* operators.

These cryptic operators allow you to declare a pointer to a member or access it. The type of such pointers includes the return type, the class which owns the member, the types of each argument and the const-ness of the function.

You cannot really convert pointers to member functions to anything else, and in particular not to void*, because they have a different sizeof.
If the function varies slightly in signature, you cannot convert from one to the other. For example, even converting from void (MyClass::*)(int) const to void (MyClass::*)(int) is not allowed. (You could do it with reinterpret_cast, but calling through the result would be undefined behaviour according to the standard.)

Pointers to member functions are not just like normal function pointers. A normal function pointer is just the address where the code of that function lies. But a pointer to a member function needs to store more information: member functions can be virtual, and there is also an offset to apply to the hidden this in case of multiple inheritance.
The sizeof of a pointer to a member function can even vary depending on the class. This is why we need to take special care when manipulating them.

Type Traits: QtPrivate::FunctionPointer

Let me introduce you to the QtPrivate::FunctionPointer type trait.
A trait is basically a helper class that gives meta-data about a given type. Another example of a trait in Qt is QTypeInfo.

What we will need to know in order to implement the new syntax is information about a function pointer.

The template<typename T> struct FunctionPointer will give us information about T via its members (a simplified specialization is sketched below):

  • ArgumentCount: An integer representing the number of arguments of the function.
  • Object: Exists only for pointer to member function. It is a typedef to the class of which the function is a member.
  • Arguments: Represents the list of arguments. It is a typedef to a meta-programming list.
  • call(T &function, QObject *receiver, void **args): A static function that will call the function, applying the given parameters.
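To give an idea, here is a simplified sketch of one specialization, for a pointer to a member function taking a single argument (this is not the actual Qt code, which also strips references from the argument type, handles the return value and covers const member functions):

template<typename Obj, typename Ret, typename Arg1>
struct FunctionPointer<Ret (Obj::*)(Arg1)>
{
    typedef Obj Object;                  // the class of which the function is a member
    typedef List<Arg1> Arguments;        // meta-programming list of the argument types
    typedef Ret ReturnType;
    typedef Ret (Obj::*Function)(Arg1);
    enum { ArgumentCount = 1 };

    static void call(Function f, Obj *o, void **arg) {
        // arg[0] is reserved for the return value, arg[1] points to the first argument
        (o->*f)(*reinterpret_cast<Arg1 *>(arg[1]));
    }
};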

Qt still supports C++98 compilers, which means we unfortunately cannot require support for variadic templates. Therefore we had to specialize our trait for each number of arguments. There are four kinds of specializations: normal function pointer, pointer to member function, pointer to const member function and functor. For each kind, we need to specialize for each number of arguments; we support up to six arguments. We also made a specialization using variadic templates, so we support an arbitrary number of arguments if the compiler supports them.

The implementation of FunctionPointer lies in qobjectdefs_impl.h.

QObject::connect

The implementation relies on a lot of template code. I am not going to explain all of it.

Here is the code of the first new overload from qobject.h:

template <typename Func1, typename Func2>
static inline QMetaObject::Connection connect(
        const typename QtPrivate::FunctionPointer<Func1>::Object *sender, Func1 signal,
        const typename QtPrivate::FunctionPointer<Func2>::Object *receiver, Func2 slot,
        Qt::ConnectionType type = Qt::AutoConnection)
{
    typedef QtPrivate::FunctionPointer<Func1> SignalType;
    typedef QtPrivate::FunctionPointer<Func2> SlotType;

    // compilation error if the arguments do not match
    Q_STATIC_ASSERT_X(int(SignalType::ArgumentCount) >= int(SlotType::ArgumentCount),
                      "The slot requires more arguments than the signal provides.");
    Q_STATIC_ASSERT_X((QtPrivate::CheckCompatibleArguments<typename SignalType::Arguments,
                                                           typename SlotType::Arguments>::value),
                      "Signal and slot arguments are not compatible.");
    Q_STATIC_ASSERT_X((QtPrivate::AreArgumentsCompatible<typename SlotType::ReturnType,
                                                         typename SignalType::ReturnType>::value),
                      "Return type of the slot is not compatible with the return type of the signal.");

    const int *types;
    /* ... Skipped initialization of types, used for QueuedConnection ... */

    QtPrivate::QSlotObjectBase *slotObj = new QtPrivate::QSlotObject<Func2,
            typename QtPrivate::List_Left<typename SignalType::Arguments, SlotType::ArgumentCount>::Value,
            typename SignalType::ReturnType>(slot);

    return connectImpl(sender, reinterpret_cast<void **>(&signal),
                       receiver, reinterpret_cast<void **>(&slot), slotObj,
                       type, types, &SignalType::Object::staticMetaObject);
}

You notice in the function signature that sender and receiver are not just QObject* as the documentation points out. They are pointers to typename FunctionPointer::Object instead. This uses SFINAE to enable this overload only for pointers to member functions, because the Object type exists in FunctionPointer only if the type is a pointer to member function.

We then start with a bunch of Q_STATIC_ASSERTs. They should generate sensible compilation error messages when the user makes a mistake. If the user did something wrong, it is important that he or she sees an error here and not in the soup of template code in the _impl.h files. We want to hide the underlying implementation from the user, who should not need to care about it.
That means that if you ever see a confusing error in the implementation details, it should be considered a bug and reported.

We then allocate a QSlotObject that is going to be passed to connectImpl(). The QSlotObject is a wrapper around the slot that will help calling it. It also knows the types of the signal arguments so it can do the proper type conversion.
We use List_Left to pass only as many arguments as the slot takes, which allows connecting a signal with many arguments to a slot with fewer arguments.

QObject::connectImpl is the private internal function that will perform the connection. It is similar to the original syntax; the difference is that instead of storing a method index in the QObjectPrivate::Connection structure, we store a pointer to the QSlotObjectBase.

The reason why we pass &slot as a void** is only to be able to compare it if the connection type is Qt::UniqueConnection.

We also pass the &signal as a void**. It is a pointer to the member function pointer. (Yes, a pointer to the pointer)

Signal Index

We need to make a relationship between the signal pointer and the signal index.
We use MOC for that. Yes, that means this new syntax is still using the MOC and that there are no plans to get rid of it :-).

MOC will generate code in qt_static_metacall that compares the parameter and returns the right index. connectImpl will call the qt_static_metacall function with the pointer to the function pointer.

void Counter::qt_static_metacall(QObject *_o, QMetaObject::Call _c, int _id, void **_a)
{
    if (_c == QMetaObject::InvokeMetaMethod) {
        /* .... skipped ....*/
    } else if (_c == QMetaObject::IndexOfMethod) {
        int *result = reinterpret_cast<int *>(_a[0]);
        void **func = reinterpret_cast<void **>(_a[1]);
        {
            typedef void (Counter::*_t)(int );
            if (*reinterpret_cast<_t *>(func) == static_cast<_t>(&Counter::valueChanged)) {
                *result = 0;
            }
        }
        {
            typedef QString (Counter::*_t)(const QString & );
            if (*reinterpret_cast<_t *>(func) == static_cast<_t>(&Counter::someOtherSignal)) {
                *result = 1;
            }
        }
        {
            typedef void (Counter::*_t)();
            if (*reinterpret_cast<_t *>(func) == static_cast<_t>(&Counter::anotherSignal)) {
                *result = 2;
            }
        }
    }
}

Once we have the signal index, we can proceed like in the other syntax.

The QSlotObjectBase

QSlotObjectBase is the object passed to connectImpl that represents the slot.

Before showing the real code, this is what QObject::QSlotObjectBase was in Qt5 alpha:

struct QSlotObjectBase {
    QAtomicInt ref;
    QSlotObjectBase() : ref(1) {}
    virtual ~QSlotObjectBase();
    virtual void call(QObject *receiver, void **a) = 0;
    virtual bool compare(void **) { return false; }
};

It is basically an interface that is meant to be re-implemented by template classes implementing the call and comparison of the function pointers.

It is re-implemented by one of the QSlotObject, QStaticSlotObject or QFunctorSlotObject template classes.

Fake Virtual Table

The problem with that is that each instantiation of those objects would need to create a virtual table which contains not only pointers to the virtual functions but also a lot of information we do not need, such as RTTI. That would result in a lot of superfluous data and relocations in the binaries.

In order to avoid that, QSlotObjectBase was changed not to be a C++ polymorphic class. Virtual functions are emulated by hand.

class QSlotObjectBase {
    QAtomicInt m_ref;
    typedef void (*ImplFn)(int which, QSlotObjectBase *this_,
                           QObject *receiver, void **args, bool *ret);
    const ImplFn m_impl;
protected:
    enum Operation { Destroy, Call, Compare };
public:
    explicit QSlotObjectBase(ImplFn fn) : m_ref(1), m_impl(fn) {}
    inline int ref() Q_DECL_NOTHROW { return m_ref.ref(); }
    inline void destroyIfLastRef() Q_DECL_NOTHROW {
        if (!m_ref.deref()) m_impl(Destroy, this, 0, 0, 0);
    }
    inline bool compare(void **a) { bool ret; m_impl(Compare, this, 0, a, &ret); return ret; }
    inline void call(QObject *r, void **a) { m_impl(Call, this, r, a, 0); }
};

The m_impl is a (normal) function pointer which performs the three operations that were previously virtual functions. The "re-implementations" set it to their own implementation in the constructor.

Please do not go and replace all the virtual functions in your code with such a hack just because you read here that it was good. This is only done in this case because almost every call to connect would generate a new, different type (since QSlotObject has template parameters which depend on the signature of the signal and the slot).
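To illustrate the pattern, here is a rough sketch of how one of these "re-implementations" plugs its own function into m_impl (a simplified version; the real QSlotObject also carries the signal's argument list and return type as template parameters and converts the arguments):

template<typename Func>
class SlotObject : public QSlotObjectBase
{
    Func function;

    static void impl(int which, QSlotObjectBase *this_, QObject *receiver, void **args, bool *ret)
    {
        SlotObject *self = static_cast<SlotObject *>(this_);
        switch (which) {
        case Destroy:
            delete self;
            break;
        case Call:
            // let FunctionPointer do the actual call on the receiver
            FunctionPointer<Func>::call(self->function,
                static_cast<typename FunctionPointer<Func>::Object *>(receiver), args);
            break;
        case Compare:
            *ret = *reinterpret_cast<Func *>(args) == self->function;
            break;
        }
    }
public:
    explicit SlotObject(Func f) : QSlotObjectBase(&impl), function(f) {}
};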

Protected, Public, or Private Signals.

Signals were protected in Qt 4 and before. It was a design choice, as signals should be emitted by the object when it changes its state. They should not be emitted from outside the object, and calling a signal on another object is almost always a bad idea.

However, with the new syntax, you need to be able to take the address of the signal from the point where you make the connection. The compiler will only let you do that if you have access to that signal. Writing &Counter::valueChanged would generate a compiler error if the signal was not public.

In Qt 5 we had to change signals from protected to public. This is unfortunate, since it means anyone can emit the signals. We found no way around it. We tried a trick with the emit keyword. We tried returning a special value. But nothing worked. I believe that the advantages of the new syntax outweigh the problem that signals are now public.

Sometimes it is even desirable to have the signal private. This is the case for example in QAbstractItemModel, where otherwise developers tend to emit signals from the derived class, which is not what the API wants. There used to be a pre-processor trick that made signals private, but it broke the new connection syntax.
A new hack has been introduced. QPrivateSignal is a dummy (empty) struct declared private in the Q_OBJECT macro. It can be used as the last parameter of the signal. Because it is private, only the object has the right to construct it for calling the signal. MOC will ignore the QPrivateSignal last argument while generating signature information. See qabstractitemmodel.h for an example.
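In practice it looks like this (a made-up class, just to illustrate the pattern):

class MyModel : public QAbstractItemModel {
    Q_OBJECT
signals:
    // Only MyModel itself can construct its private QPrivateSignal type, so only
    // MyModel can emit this signal:  emit somethingChanged(42, QPrivateSignal());
    void somethingChanged(int newValue, QPrivateSignal);
};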

More Template Code

The rest of the code is in qobjectdefs_impl.h and qobject_impl.h. It is mostly standard dull template code.

I will not go into much more detail in this article, but I will go over a few items that are worth mentioning.

Meta-Programming List

As pointed out earlier, FunctionPointer::Arguments is a list of the arguments. The code needs to operate on that list: iterate over each element, take only a part of it or select a given item.

That is why there is QtPrivate::List, which can represent a list of types. Some helpers to operate on it are QtPrivate::List_Select and QtPrivate::List_Left, which give respectively the N-th element in the list and a sub-list containing the first N elements.

The implementation of List is different for compilers that support variadic templates and compilers that do not.

With variadic templates, it is a template<typename... T> struct List;. The list of arguments is just encapsulated in the template parameters.
For example: the type of a list containing the arguments (int, QString, QObject*) would simply be:

List<int, QString, QObject *>

Without variadic templates, it is a LISP-style list: template<typename Head, typename Tail> struct List; where Tail can be either another List or void for the end of the list.
The same example as before would be:

List<int, List<QString, List<QObject *, void> > >
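As an illustration, a helper such as List_Select can be written like this in the variadic case (a simplified sketch, not the actual Qt code):

// List_Select<L, N>::Value is the N-th type (0-based) in the list L
template<typename L, int N> struct List_Select;

template<typename Head, typename... Tail>
struct List_Select<List<Head, Tail...>, 0> { typedef Head Value; };

template<typename Head, typename... Tail, int N>
struct List_Select<List<Head, Tail...>, N> {
    typedef typename List_Select<List<Tail...>, N - 1>::Value Value;
};

// List_Select<List<int, QString, QObject *>, 1>::Value is QString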

ApplyReturnValue Trick

In the function FunctionPointer::call, args[0] is meant to receive the return value of the slot. If the signal returns a value, it is a pointer to an object of the signal's return type, otherwise it is 0. If the slot returns a value, we need to copy it into args[0]. If it returns void, we do nothing.

The problem is that it is not syntactically correct to use the return value of a function that returns void. Should I have duplicated the already huge amount of template code: once for the void return type and once for the non-void case? No, thanks to the comma operator.

In C++ you can do something like that:

functionThatReturnsVoid(), somethingElse();

You could have replaced the comma by a semicolon and everything would have been fine.

Where it becomes interesting is when you call it with something that is not void:

functionThatReturnsInt(), somethingElse();

There, the comma will actually call an operator that you can even overload. That is what we do in qobjectdefs_impl.h:

template <typename T>
struct ApplyReturnValue {
    void *data;
    ApplyReturnValue(void *data_) : data(data_) {}
};

template <typename T, typename U>
void operator,(const T &value, const ApplyReturnValue<U> &container) {
    if (container.data)
        *reinterpret_cast<U *>(container.data) = value;
}

template <typename T>
void operator,(T, const ApplyReturnValue<void> &) {}

ApplyReturnValue is just a wrapper around a void*. It can then be used in each helper. This is, for example, the case of a functor without arguments:

static void call(Function &f, void *, void **arg) {
    f(), ApplyReturnValue<SignalReturnType>(arg[0]);
}

This code is inlined, so it will not cost anything at run-time.

Conclusion

This is it for this blog post. There is still a lot to talk about (I have not even mentioned QueuedConnection or thread safety yet), but I hope you found this interesting and that you learned something that might help you as a programmer.


You were not doing so wrong.


This post is about the use of QThread. It is an answer to a three-year-old blog post by Brad, my colleague at the time:
You're doing it wrong

In his blog post, Brad explains that he saw many users misusing QThread by sub-classing it, adding some slots to that subclass and doing something like this in the constructor:

 moveToThread(this);

They move the thread object to itself. As Brad mentions, this is wrong: the QThread is supposed to be the interface to manage the thread, so it is supposed to be used from the creating thread.

Slots in the QThread object are then not run in that thread, and having slots in a subclass of QThread is bad practice.

But then Brad continues and discourages any sub-classing of QThread at all. He claims it is against proper object-oriented design. This is where I disagree. Putting code in run() is a valid object-oriented way to extend a QThread: A QThread represents a thread that just starts an event loop, a subclass represents a thread that is extended to do what's in run().

After Brad's post, some members of the community went on a crusade against sub-classing QThread. The problem is that there are many perfectly valid reasons to subclass QThread.

With Qt 5.0 and Qt 4.8.4, the documentation of QThread was changed so the sample code does not involve sub-classing. Look at the first code sample of the Qt 4.8 QThread documentation (Update: link to archive.org since the newer documentation is fixed). It has many lines of boilerplate just to run some code in a thread. And there is even a leak: the QThread is never going to quit and be destroyed.

I was asked a question on IRC by a user who followed that example in order to run some simple code in a thread. He had a hard time figuring out how to properly destroy the thread. That is what motivated me to write this blog entry.

If you allow subclassing QThread, this is what you get:

class WorkerThread : public QThread {
    void run() {
        // ...
    }
};

void MyObject::startWorkInAThread()
{
    WorkerThread *workerThread = new WorkerThread;
    connect(workerThread, SIGNAL(finished()),
            workerThread, SLOT(deleteLater()));
    workerThread->start();
}

This code no longer leaks, is much simpler, and has less overhead since it does not create useless objects.

The Qt threading example threadedfortuneserver uses this pattern to run blocking operations and is much simpler than the equivalent using a worker object.

I have submitted a patch to the documentation to not discourage sub-classing QThread anymore.

Rules of thumb

When to subclass and when not to?

  • If you do not really need an event loop in the thread, you should subclass.
  • If you need an event loop and handle signals and slots within the thread, you may not need to subclass; a worker object moved to the thread does the job (see the sketch below).
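For that second case, the worker-object approach looks roughly like this (a minimal sketch; Worker and doWork are made-up names):

class Worker : public QObject {
    Q_OBJECT
public slots:
    void doWork() {
        // runs in the worker thread, because Worker was moved there
    }
};

// in the controlling thread:
QThread *thread = new QThread;
Worker *worker = new Worker;
worker->moveToThread(thread);
QObject::connect(thread, SIGNAL(started()), worker, SLOT(doWork()));
QObject::connect(thread, SIGNAL(finished()), worker, SLOT(deleteLater()));
QObject::connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
thread->start();
// ... later, call thread->quit() to stop the event loop and tear everything down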

What about using QtConcurrent instead?

QThread is quite low level, and you are often better off using a higher-level API such as QtConcurrent.

Now, QtConcurrent has its own set of problems: it is tied to a single thread pool, so it is not a good solution if you want to run blocking operations. It also has some problems in its implementation that give some performance overhead. All of this is fixable. Perhaps even Qt 5.1 will see some improvements.

A good alternative is also the C++11 standard library with std::thread and std::async, which are now the standard way to run code in a thread. And the good news is that it still works fine with Qt: all other Qt threading primitives can be used with native threads. (Qt will automatically create a QThread if required.)
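As a small illustration (assuming Qt 5 and a compiler with C++11 enabled; the function name is made up), Qt primitives can be used from a native thread like this:

#include <thread>
#include <QMutex>
#include <QThread>
#include <QDebug>

QMutex mutex;            // Qt synchronization primitives work fine in a std::thread

void work()
{
    QMutexLocker locker(&mutex);
    // Qt creates a QThread wrapper on demand for this native thread
    qDebug() << "working in" << QThread::currentThread();
}

int main()
{
    std::thread t(work);
    t.join();
}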

QMap vs. QHash: A small benchmark


While working on my Qt developer days 2012 presentation (QtCore in depth), I made a benchmark comparing QMap and QHash. I thought it would be nice to share the results in this short blog entry.

Under The Hood

The Qt 4 containers are well explained by this old Qt Quarterly article.

QHash is implemented using a hash table, while QMap was implemented using a skip list in Qt 4.

In Qt 5, the implementation of the containers has changed a bit, but the concepts are still the same. Here are the main differences:

  • QVector, QString and QByteArray now share the same implementation (QArrayData). The main difference is that there is now an offset, which might in the future allow referencing external data.
  • QMap implementation has totally changed. It is no longer a skip list, but a red-black tree.

The Benchmark

The benchmark is simple: it does lots of look-ups in a loop for one second and counts the number of iterations.
It is not really scientific. The goal is only to show the shape of the curves.

The source: benchmark.cc
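Roughly, the inner loop looks like this (a simplified sketch, not the actual benchmark.cc; the real benchmark varies the container type, the number of elements and the keys):

#include <QElapsedTimer>
#include <QHash>

// count how many look-ups we can do in about one second
template <typename Container>
qint64 lookupsPerSecond(const Container &container, int numKeys)
{
    QElapsedTimer timer;
    timer.start();
    qint64 iterations = 0;
    volatile int sink = 0;        // keep the compiler from optimizing the look-up away
    while (timer.elapsed() < 1000) {
        sink = sink + container.value(int(iterations % numKeys));
        ++iterations;
    }
    return iterations;
}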

The Result

Run on my computer, gcc 4.7. Higher is better. The number of elements is on a logarithmic scale. For QHash, one should expect the result not to change with the number of elements; for QMap it should be O(log N): a straight line on a logarithmic scale.

Qt 4.8

QMap performs slightly slower than std::map. QMap lookup is faster than QHash lookup for fewer than about 10 elements.

Qt 5

It was a good idea to change from a skip list to a red-black tree. The performance of the Qt containers compared to the STL is about the same. QMap is faster than QHash if there are fewer than about 20 elements.

If you compare the numbers between Qt 5 and Qt 4, you see that Qt 5 performs better. That might be related to the changes in QString.

Conclusion

The typical rule is: use QMap only if you need the items to be sorted or if you know that you always have a very small number of items in your map.

iQuassel for iPhone and iPod


In our spare time at Woboq, we also do projects that scratch our own itches. For Olivier this was his web-based source code browser, for me it was my Quassel IRC client for iPad.

A lot of people wanted me to port it to iPhone/iPod too, so before my last China holiday, I bought an iPod touch (Yes, I don't have an iPhone, my Nokia N9 works just fine...) and used some time of my holiday to make those people happy.

The porting itself went quite fast, since with iOS you are forced to work with the MVC pattern. I just had to implement a new UIStoryboard UI for the iPhone/iPod and change my controller code at several points, depending on the result of a call to UI_USER_INTERFACE_IDIOM(). Polishing, testing and fixing took the most time, as usual.

So, without further ado, please enjoy Quassel IRC on iPhone and iPod.

PS: It's a universal app, so if you've already bought the iPad version, you get this one for free.
PPS: Apple didn't like the old icon anymore, so yaccin made a new one.
PPPS: Do you need hosting for your core?

Property Bindings and Declarative Syntax in C++


QtQuick and QML form a really nice language to develop user interfaces. The QML Bindings are very productive and convenient. The declarative syntax is really a pleasure to work with.
Would it be possible to do the same in C++? In this blog post, I will show a working implementation of property bindings in pure C++.

Disclaimer: This was done for the fun of it and is not made for production.

If you read this article from the RSS feed, you may want to open it at its original URL to see properly formatted code.

Bindings

The goal of bindings is to have one property which depends on other properties. When its dependencies are changed, the property is automatically updated.

Here is an example inspired by the QML documentation.

int calculateArea(int width, int height) {
    return (width * height) * 0.5;
}

struct rectangle {
    property<rectangle *> parent = nullptr;
    property<int> width = 150;
    property<int> height = 75;
    property<int> area = [&]{ return calculateArea(width, height); };
    property<std::string> color = [&]{
        if (parent() && area > parent()->area)
            return std::string("blue");
        else
            return std::string("red");
    };
};

If you are not familiar with the [&]{ ... } syntax, this is a lambda function. I am also using the fact that in C++11 you can initialize members directly in the declaration.

Now, we'll see how this property class works. At the end I will show a cool demo of what you can do.

The code is using lots of C++11 constructs. It has been tested with GCC 4.7 and Clang 3.2.

Property

I have used my knowledge from QML and the QObject system to build something similar with C++ bindings.
The goal is to make a proof of concept. It is not optimized. I just wanted to have comprehensible code for this demo.

The idea behind the property class is the same as in QML. Each property keeps a list of its dependencies. While a binding is being evaluated, every property access is recorded as a dependency.

property<T> is a template class. The common part is put in a base class: property_base.

class property_base
{
    /* Set of properties which are subscribed to this one.
       When this property is changed, subscriptions are refreshed */
    std::unordered_set<property_base *> subscribers;
    /* Set of properties this property is depending on. */
    std::unordered_set<property_base *> dependencies;
public:
    virtual ~property_base()
    { clearSubscribers(); clearDependencies(); }

    // re-evaluate this property
    virtual void evaluate() = 0;

    // [...]
protected:
    /* This function is called by the derived class when the property has changed.
       The default implementation re-evaluates all the properties subscribed to this one. */
    virtual void notify() {
        auto copy = subscribers;
        for (property_base *p : copy) {
            p->evaluate();
        }
    }

    /* Derived classes call this function whenever this property is accessed.
       It registers the dependencies. */
    void accessed() {
        if (current && current != this) {
            subscribers.insert(current);
            current->dependencies.insert(this);
        }
    }

    void clearSubscribers() {
        for (property_base *p : subscribers)
            p->dependencies.erase(this);
        subscribers.clear();
    }
    void clearDependencies() {
        for (property_base *p : dependencies)
            p->subscribers.erase(this);
        dependencies.clear();
    }

    /* Helper class that is used on the stack to set the current property being evaluated */
    struct evaluation_scope {
        evaluation_scope(property_base *prop) : previous(current) {
            current = prop;
        }
        ~evaluation_scope() { current = previous; }
        property_base *previous;
    };
private:
    friend struct evaluation_scope;
    /* thread_local */ static property_base *current;
};

Then we have the implementation of the class property.

template <typename T>
struct property : property_base {
    typedef std::function<T()> binding_t;

    property() = default;
    property(const T &t) : value(t) {}
    property(const binding_t &b) : binding(b) { evaluate(); }

    void operator=(const T &t) {
        value = t;
        clearDependencies();
        notify();
    }
    void operator=(const binding_t &b) {
        binding = b;
        evaluate();
    }

    const T &get() const {
        const_cast<property *>(this)->accessed();
        return value;
    }

    // automatic conversions
    const T &operator()() const { return get(); }
    operator const T &() const { return get(); }

    void evaluate() override {
        if (binding) {
            clearDependencies();
            evaluation_scope scope(this);
            value = binding();
        }
        notify();
    }

protected:
    T value;
    binding_t binding;
};

property_hook

It is also desirable to be notified when a property is changed, so we can for example call update(). The property_hook class lets you specify a function which will be called when the property changes.
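A rough sketch of how property_hook can be built on top of property<T> (a simplified version; the actual class in the demo may differ):

template<typename T>
struct property_hook : property<T> {
    typedef std::function<void()> hook_t;
    hook_t hook;

    property_hook(const hook_t &h) : hook(h) {}

    // keep the assignment operators (value and binding) from property<T>
    using property<T>::operator=;

protected:
    void notify() override {
        property_base::notify();   // re-evaluate the subscribers as usual
        hook();                    // then run the user-supplied hook, e.g. to call update()
    }
};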

Qt bindings

Now that we have the property class, we can build everything on top of that. We could build for example a set of widgets and use those. I'm going to use Qt Widgets for that. If the QtQuick elements had a C++ API, I could have used those instead.

The property_qobject

I introduce a property_qobject, which basically wraps a QObject property in our property system. You initialize it by passing a pointer to the QObject and the name of the property you want to track, and voilà.

The implementation is not efficient and could be optimized by sharing the QObject rather than having one for each property. With Qt 5 I could also connect to a lambda instead of doing this hack, but I used Qt 4.8 here.

Wrappers

Then I create a wrapper around each class I am going to use, which exposes the properties as property_qobject members.

A Demo

Now let's see what we are capable of doing:

This small demo just has a line edit which lets you specify a color and few sliders to change the rotation and the opacity of a graphics item.

Let the code speak for itself.

We need a Rectangle object with the proper bindings:

struct GraphicsRectObject : QGraphicsWidget {
    // bind the QObject properties.
    property_qobject<QRectF> geometry { this, "geometry" };
    property_qobject<qreal> opacity { this, "opacity" };
    property_qobject<qreal> rotation { this, "rotation" };
    // add a color property, with a hook to update when it changes
    property_hook<QColor> color { [this]{ this->update(); } };

private:
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *) override {
        painter->setBrush(color());
        painter->drawRect(boundingRect());
    }
};

Then we can proceed and declare a window object with all the subwidgets:

struct MyWindow : Widget {
    LineEdit colorEdit {this};
    Slider rotationSlider {Qt::Horizontal, this};
    Slider opacitySlider {Qt::Horizontal, this};
    QGraphicsScene scene;
    GraphicsView view {&scene, this};
    GraphicsRectObject rectangle;

    ::property<int> margin {10};

    MyWindow() {
        // Layout the items.  Not really as good as real layouts, but it demonstrates bindings
        colorEdit.geometry = [&]{ return QRect(margin, margin,
                                               geometry().width() - 2*margin,
                                               colorEdit.sizeHint().height()); };
        rotationSlider.geometry = [&]{ return QRect(margin,
                                                    colorEdit.geometry().bottom() + margin,
                                                    geometry().width() - 2*margin,
                                                    rotationSlider.sizeHint().height()); };
        opacitySlider.geometry = [&]{ return QRect(margin,
                                                   rotationSlider.geometry().bottom() + margin,
                                                   geometry().width() - 2*margin,
                                                   opacitySlider.sizeHint().height()); };
        view.geometry = [&]{
            int x = opacitySlider.geometry().bottom() + margin;
            return QRect(margin, x, width() - 2*margin, geometry().height() - x - margin);
        };

        // Some proper default value
        colorEdit.text = QString("blue");
        rotationSlider.minimum = -180;
        rotationSlider.maximum = 180;
        opacitySlider.minimum = 0;
        opacitySlider.maximum = 100;
        opacitySlider.value = 100;

        scene.addItem(&rectangle);

        // now the 'cool' bindings
        rectangle.color = [&]{ return QColor(colorEdit.text); };
        rectangle.opacity = [&]{ return qreal(opacitySlider.value/100.); };
        rectangle.rotation = [&]{ return rotationSlider.value(); };
    }
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    MyWindow window;
    window.show();
    return app.exec();
}

Conclusion

You can clone the code repository and try it for yourself.

Perhaps one day, a library will provide such property bindings.

Data initialization in C++


In this blog post, I am going to review the different kinds of data and how they are initialized in a program.

What I am going to explain here is valid for Linux and GCC.

Code Example

I'll just start by showing a small piece of code. What is going to interest us is where the data will end up in memory and how it is initialized.

const char string_data[] = "hello world"; // .rodata
const int even_numbers[] = { 0*2 , 1*2,  2*2,  3*2, 4*2}; //.rodata

int all_numbers[] = { 0, 1, 2, 3, 4 };  //.data

static inline int odd(int n) { return n*2 + 1; }
const int odd_numbers[] = { odd(0), odd(1), odd(2), odd(3), odd(4) }; //initialized

QString qstring_data("hello QString"); //object with constructor and destructor

I'll analyze the assembly. It has been generated with the following command, then re-formatted for better presentation in this blog post.

g++ -O2 -S data.cpp

(I also had to add a function that uses the data in order to prevent the compiler from removing arrays that were not used.)

The sections

On Linux, binaries (programs or libraries) are stored as files in the ELF format. Those files are composed of many sections. I'll just go over a few of them:

The code: .text

This section is the actual code of your library or program: it contains all the instructions for each function. That part of the code is mapped into memory and shared between the instances of the processes that use it (provided the library is compiled as position independent, which is usually the case).

I am not interested in the code in this blog post, let us move to the data sections.

The read-only data: .rodata

This section will be loaded the same way as the .text section is loaded. It will also be shared between processes.

It contains the arrays that are marked as const, such as string_data and even_numbers.

.section    .rodata
_ZL11string_data:
    .string "hello world"
_ZL12even_numbers:
    .long   0
    .long   2
    .long   4
    .long   6
    .long   8

You can see that even if the even_numbers array was initialized with multiplications, the compiler was able to optimize and generate the array at compile time.

The _ZL11 that is part of the name is the name mangling, because the array is const.

Writable data: .data

The data section contains the pre-initialized data that is not read-only.
This section is not shared between processes but copied for each instance of a process that uses it. (Actually, with the copy-on-write optimization in the kernel, it might only need to be copied if the data changes.)

There goes our all_numbers array, which has not been declared as const.

.data
all_numbers:
    .long   0
    .long   1
    .long   2
    .long   3
    .long   4

Initialized at run-time: .bss + .ctors

The compiler was not able to optimize the calls to odd(); the array has to be computed at run-time. Where will our odd_numbers array be stored?

What will happen is that the array is not stored in the binary; instead, some space is reserved for it in the .bss section. That section is just some memory which is allocated for each process and initialized to 0.

The binary also contains a section with code that is going to be executed before main() is called.

.section    .text.startup
_GLOBAL__sub_I_odd_numbers:
    movl    $1, _ZL11odd_numbers(%rip)
    movl    $3, _ZL11odd_numbers+4(%rip)
    movl    $5, _ZL11odd_numbers+8(%rip)
    movl    $7, _ZL11odd_numbers+12(%rip)
    movl    $9, _ZL11odd_numbers+16(%rip)
    ret

.section    .ctors,"aw",@progbits
    .quad   _GLOBAL__sub_I_odd_numbers

.local  _ZL11odd_numbers  ; reserve 20 bytes in the .bss section
    .comm   _ZL11odd_numbers,20,16

The .ctors section contains a table of pointers to functions that are going to be called by the loader before it calls main(). In our case, there is only one: the code that initializes the odd_numbers array.

Global Object

How about our QString? It is a global C++ object with a constructor and destructor. It is simply initialized by running the constructor at start-up.

.section    .rodata.str1.1,"aMS",@progbits,1
.LC0:
    .string "hello QString"

.section    .text.startup,"ax",@progbits
_GLOBAL__sub_I_qstring_data:
       ; QString constructor (inlined)
    movl    $-1, %esi
    movl    $.LC0, %edi
    call    _ZN7QString16fromAscii_helperEPKci
    movq    %rax, _ZL12qstring_data(%rip)
       ; register the destructor
    movl    $__dso_handle, %edx
    movl    $_ZL12qstring_data, %esi
    movl    $_ZN7QStringD1Ev, %edi
    jmp __cxa_atexit   ; (tail call)

Here is the code of the constructor, which has been inlined.

We can also see that the code calls the function __cxa_atexit with the parameters $_ZL12qstring_data and $_ZN7QStringD1Ev, which are respectively the address of the QString object and a function pointer to the QString destructor. In other words, this code registers the destructor of QString to be run on exit.
The third parameter, $__dso_handle, is a handle to this dynamic shared object (used to run the destructor when a plugin is unloaded, for example).

What is the problem with global objects with constructor?

  • The order in which the constructors are called is not specified by the C++ standard. If you have dependencies between your global objects, you will run into trouble.
  • All the constructors of all the globals in all the libraries need to be run before main() and slow down the startup of the application (even for objects that will never be used).

This is why it is not recommended to have global objects in libraries. Instead, one can use function-static objects, which are initialized on first use. (Qt provides a macro for that: Q_GLOBAL_STATIC, which is made public in Qt 5.1.)
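For illustration, the function-static pattern looks like this (names made up):

// The object is constructed on first use instead of at program start-up.
// Note: before C++11 this lazy initialization is not guaranteed to be thread-safe,
// which is one reason Qt provides the Q_GLOBAL_STATIC macro.
QString &globalString()
{
    static QString s("hello QString");
    return s;
}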

Here comes C++11

C++11 comes with a new feature: constexpr

That keyword can be used in two ways: if you specify that a function is constexpr, it means that the function can be run at compile time.
If you specify that a variable is constexpr, it means it can be computed at compile time.

Let us slightly modify the example above and see what it does:

static inline constexpr int odd(int n) { return n*2 + 1; }
constexpr int odd_numbers[] = { odd(0), odd(1), odd(2), odd(3), odd(4) };

Two constexpr were added.

.section    .rodata
_ZL11odd_numbers:
    .long   1
    .long   3
    .long   5
    .long   7
    .long   9

Now they are generated at compile time.

If a class has a constructor that is declared constexpr and has no destructor, you can have it as a global object and it will be initialized at compile time.
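For example (a small illustration, not taken from the listing above):

struct Point {
    constexpr Point(int x, int y) : x(x), y(y) {}
    int x, y;
};
// No run-time constructor call: the object can be emitted directly into .rodata.
constexpr Point origin(0, 0);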

Since Qt 4.8, there is a macro Q_DECL_CONSTEXPR which expands to constexpr if the compiler supports it, or to nothing otherwise.

Proof Of Concept: Re-implementing Qt moc using libclang


I have been trying to re-write Qt's moc using libclang from the LLVM project.

The result is moc-ng. It is really two different things:

  1. A plugin for clang to be used when compiling your code with clang;
  2. and an executable that can be used as a drop-in replacement for moc.

What is moc again?

moc is a developer tool which is part of the Qt library. Its role is to handle Qt's extensions within the C++ code to offer introspection and enable Qt signals and slots.

What are clang and libclang?

clang is the C and C++ frontend of the LLVM compiler. It is not only a compiler, though; it also contains a library (libclang) which helps to write a C++ parser.

Motivation

moc is implemented using a custom, naive C++ parser which does just enough to extract the right information from your source files. The limitation is that it can sometimes choke on more complex C++ code, and it is not compatible with some of the features provided by the newer versions of the C++ standard (such as C++11 trailing return types or advanced template argument types).

Using clang as a frontend gives it a parser that can handle even the most complicated constructs allowed by C++.

Having it as a plugin for clang would also allow passing meta-data directly to LLVM without going through the generated code, allowing things that would not be possible with generated code, such as having Q_OBJECT in a function-local class. (That is not yet implemented.)

Expressive Diagnostics

Clang also has a very good diagnostics framework, which allows better error analysis.
Compare the error from moc:

With moc-ng

See how I used clang's look-up system to check the existence of the identifiers and suggest typo corrections, while moc ignores such errors and you get a weird error in the generated code.

Meet moc-ng

moc-ng is my proof-of-concept attempt at re-implementing the moc using clang as a frontend. It is not officially supported by the Qt Project.

It is currently in alpha state, but is already working very well. I was able to replace moc and compile many modules of qt5, including qtbase, qtdeclarative and qt-creator.

All the Qt tests that I ran passed or had an expected failure (for example, tst_moc parses moc's error output, which has now changed).

Compatibility with the official moc

I have tried as much as possible to stay compatible with the real moc. But there are some differences to be aware of.

Q_MOC_RUN

There is a Q_MOC_RUN macro that is defined when the original moc is run. It is typically used to hide from moc some complicated C++ constructs it would otherwise choke on. Because we need to see the full C++ like a normal compiler, we do not define it. This may be a problem when signals, slots or other Qt meta things are defined in a Q_MOC_RUN block.

Missing or not Self-Contained Headers

The official moc ignores any headers that are not found, so if include paths are not passed to moc, it won't complain. Also, the moc parser does not care if a type has not been declared, and it won't report any of those errors.

moc-ng has a stricter C++ parser that requires a self-contained header. Fortunately, clang falls back gracefully when there are errors, and I managed to turn all the errors into warnings. So when parsing a non-self-contained header, or if the include flags are wrong, one gets lots of warnings from moc-ng.

Implementation details and Challenges

I am now going to go over some implementation details and challenges I encountered.

I used the C++ clang tooling API directly, instead of using libclang's C wrapper, even though the C++ API does not maintain source compatibility. The reasons are that the C++ API is much more complete, and that I want to use C++. I did not want to write a C++ wrapper around a C wrapper around the C++ clang.
In my experience with the code browser (which also uses the C++ API directly), there are not that many API changes, and keeping compatibility is not that hard.

Annotations

The clang libraries parse the C++ and give us the AST. From that AST, one can list all the classes and their methods in a translation unit. It has all the information you can find in the code, with the location of each declaration.

But the pre-processor removes all the special macros like signals or slots. I needed a way to know which methods are tagged with the special Qt keywords.
At first, I thought I would use a pre-processor hook to remember the locations where those special macros are expanded. That could have worked, but there is a better way. I got the idea from the qt-creator wip/clang branch, which tries to use clang as a code model. They use the attribute extension to annotate the methods. Annotations are meant exactly for this use case: annotating the source code with non-standard extensions so a plugin can act upon them. And the good news is that they can be placed exactly where the signals or slots keywords can be placed.

#define Q_SIGNAL  __attribute__((annotate("qt_signal")))
#define Q_SLOT    __attribute__((annotate("qt_slot")))
#define Q_INVOKABLE  __attribute__((annotate("qt_invokable")))

#define signals    public Q_SIGNAL
#define slots      Q_SLOT

We do the same for all the other macros that annotate methods. But we still need to find something for the macros that annotate classes: Q_OBJECT, Q_PROPERTY, Q_ENUMS.
Those were a bit more tricky. The solution I found is to use a static_assert with a given pattern. However, static_assert is C++11-only and I want this to work without C++11 enabled. Fortunately, clang accepts C11's _Static_assert as an extension in all modes. Using this trick, I can walk the AST to find the specific static_assert that matches the pattern and get the content of the string literal.

#define QT_ANNOTATE_CLASS(type, anotation)  \
    __extension__ _Static_assert(sizeof (#anotation), #type);

#define Q_ENUMS(x) QT_ANNOTATE_CLASS(qt_enums, x)
#define Q_FLAGS(x) QT_ANNOTATE_CLASS(qt_flags, x)

#define Q_OBJECT   QT_ANNOTATE_CLASS(qt_qobject, "") \
        /*... other Q_OBJECT declarations ... */

We just have to replace the Qt macros with our macros. I do that by injecting code right when we exit qobjectdefs.h, which defines all the Qt macros.

Tags

QMetaMethod::tag allows the programmer to leave a tag for some extension in front of a method. It is not used much. To my knowledge, only QtDBus relies on this feature, for Q_NOREPLY.

The problem is that this relies on macros that are defined only if Q_MOC_RUN is not defined. So I had to hook into the pre-processor to see when we are defining macros in places that are conditioned on Q_MOC_RUN. I can do that because the pre-processor callback has hooks on #if and #endif, so I can see if we are currently handling a block of code that would be hidden from the moc. When a macro is defined there, I register it as a possible tag. Later, when such a macro is expanded, I register its location. For each method, I can then query whether there was a tag on the same line. There are many cases where this would fail, but fortunately tags are not a commonly used feature, and the simple cases are covered.

Suppressing The Errors

As stated, the Qt moc ignores most errors. I already tell clang not to parse the bodies of the functions, but you may still get errors if types used in declarations are not found. When moc-ng is run as a binary, it is desirable not to abort on those errors, for compatibility with moc. I did not find an easy way to change errors into warnings. You can promote some warnings into errors or change fatal errors into normal errors, but you cannot easily suppress errors or change them into warnings.

What I did is create my own diagnostic consumer, which proxies the errors to the default one but turns some of them into warnings. The problem is that clang would still count them as errors, so the hack I did was to reset the error count. I wish there was a better way.

When used as a plugin, there is only one kind of error that should be ignored: an include of "foo.moc". That file will not exist because moc is not run. Fortunately, clang has a callback for when an include file has not been found. If it looks like a file that should have been generated by moc (starting with moc_ or ending with .moc), then that include can be ignored.

Qt's Binary JSON

Since Qt 5, there is a macro Q_PLUGIN_METADATA which you can use to load a JSON file; moc then embeds this JSON in the binary format used internally by QJsonDocument.

I did not want to depend on Qt (to avoid the bootstrapping issue). Fortunately, LLVM already has a good YAML parser (YAML being a super-set of JSON), so parsing was not a problem at all. The problem was generating Qt's binary format. I spent too much time trying to figure out why Qt would not accept my binary before noticing that QJsonDocument enforces alignment constraints on some items. Bummer.

Error Reporting within String Literal

When parsing the contents of things like Q_PROPERTY, I wish to report errors at their location in the source code. Using the macro described earlier, the content of Q_PROPERTY is turned into a string literal. Clang supports reporting errors within string literals in macros. As you can see on the screenshot, this works pretty well.

But there are still two levels of indirection I would like to hide. It would be nice to hide some built-in macros from the diagnostic (I have hidden one level in the screenshot).
Also, I want to be able to report the location in the Q_PROPERTY line and not in the scratch space. But when using # in a macro, clang does not track the exact spelling location anymore.

Consider compiling this snippet with clang: it should warn you about the escape sequences \o, \p and \q not being valid. Look where the caret is for each warning:

#define M(A, B)  A "\p" #B;
char foo[] = M("\o",   \q );

For \o and \p, clang puts the caret at the right place when the macro is expanded. But for \q, the caret is not put at its spelling location.

The way clang tracks the real origin of a source location is very clever and efficient. Each source location is represented by a clang::SourceLocation, which is basically a 32-bit integer. The source location space is divided into consecutive entries that represent files or macro expansions. Each time a macro is expanded, a new macro expansion entry is added, containing the source location of the expansion and the location of the #define. In principle, there could be a new entry for each expanded token, but consecutive entries are merged.
One cannot do the same for stringified tokens because the string literal is only one token, yet it may come from many tokens. There are also some escaping rules to take into account that make it harder.

The way to do it is probably to leave the source locations as they are, but to have a special case for the scratch space when trying to find the location of the caret.

Built-in includes

Some headers required by the standard library are not located in a standard location but are shipped with clang and looked up in ../lib/clang/3.2/include relative to the binary.
I do not want to require external files; I would like to have a simple, single, static binary without dependencies.

The solution is to bundle those headers within the binary. I have nothing like qrc resources, but I can do the same in a few lines of CMake:

file(GLOB BUILTINS_HEADERS "${LLVM_BIN_DIR}/../lib/clang/${LLVM_VERSION}/include/*.h")
foreach(BUILTIN_HEADER ${BUILTINS_HEADERS})
    file(READ ${BUILTIN_HEADER} BINARY_DATA HEX)
    string(REGEX REPLACE "(..)" "\\\\x\\1" BINARY_DATA "${BINARY_DATA}")
    string(REPLACE "${LLVM_BIN_DIR}/../lib/clang/${LLVM_VERSION}/include/" 
                   "/builtins/" FN "${BUILTIN_HEADER}")
    set(EMBEDDED_DATA "${EMBEDDED_DATA} { \"${FN}\" , \"${BINARY_DATA}\" } , ")
endforeach()
configure_file(embedded_includes.h.in embedded_includes.h)

This just goes over all *.h files in the built-in include directory and reads each one into a hex string; the regexp then transforms that into something suitable for a C++ string literal. Then configure_file replaces @EMBEDDED_DATA@ with its value.
Here is what embedded_includes.h.in looks like:

static struct { const char *filename; const char *data; } EmbeddedFiles[] = {
    @EMBEDDED_DATA@
    {0, 0}
};

Conclusion

moc-ng was a fun project to do, just like developing our C/C++ code browser. The clang/LLVM frameworks are really powerful and nice to work with.

Please have a look at the moc-ng project on GitHub or browse the source online.

Profiling PHP Applications (using ownCloud as an Example)


XDebug is a PHP extension that enables you (amongst other things) to create KCachegrind-compatible profiling files. I'm showing here how we can use those to analyze ownCloud's performance.

Install XDebug for your PHP version, then add in php.ini:

xdebug.profiler_enable = On
xdebug.profiler_output_name = "cachegrind.out.%s"
xdebug.profiler_append = 1

With the append parameter we can have the profiling data appended to the file so we can see how it changes over time (values going up or down).

Now when clicking around in your ownCloud installation, you'd see those files getting created (for me in /var/tmp, usually in the directory where PHP stores its sessions):

-rw-r--r--  1 guruz  guruz   509K Aug 14 18:55 cachegrind.out._www_owncloud5_cron_php
-rw-r--r--  1 guruz  guruz    16M Aug 14 18:55 cachegrind.out._www_owncloud5_index_php
-rw-r--r--  1 guruz  guruz   1.1M Aug 14 18:55 cachegrind.out._www_owncloud5_remote_php

While you can use KCachegrind to look at the profiling output, there is also a way to have a look with the web-based tool Webgrind. I'm using this one since it is cross-platform.

Drop Webgrind into your web root, e.g. inside your ownCloud directory:

git clone https://github.com/jokkedk/webgrind.git webgrind
(Please note that it makes more sense to deploy Webgrind somewhere else, since the above setting in php.ini will also enable profiling of Webgrind itself, which will make it sloooooow.)

Then you can visit /webgrind on your HTTP server, select one of the files and have a look at its numbers. The Total Inclusive Cost is the relevant number to look at; it displays how much time a function takes including all the things it calls.

In this example, I'm looking at the index.php trace (in cachegrind.out._www_owncloud5_index_php) where I want to optimize checkServer() in OC_Util. I see in the trace (and in the code) that this function is called for every request.

While it is not a super expensive function compared to the other ones, optimizing a few of those would add up!

This is before optimizing:

The goal of the checkServer() function is to perform a sanity test on the server configuration and bail out if there is a problem. We decided that it is enough to perform this sanity test once per session.

We can therefore store the result of this function in $_SESSION like you can see in this pull request on GitHub (ownCloud is using a wrapper called \OC::$session).

After applying this change and then clicking some more inside ownCloud's webinterface, we should see the cost inside Webgrind for checkServer() going down :-)
(Note that the function might even go away from the trace since by default Webgrind only shows 90% of the function ordered by expensiveness)

This is after optimizing:

Optimizing is often about tradeoffs: in this case our session file will grow by a few bytes (checkServer_suceeded|b:1), but since it is loaded and parsed anyway for other values, we've settled that this is probably faster than what checkServer() does currently.


Objective C (iOS) for Qt C++ Developers


For our first customer iOS application, I had to learn Objective C. Coming from the Qt world that was not too hard. To make life even easier for the readers of this blog, I am going to describe some of the things I have learnt. This is more of a brain dump than a tutorial, but I still hope it is useful for you.
I'll first write about the language differences and then about the class libraries.

Objective C vs C vs C++

Similarly to C++, Objective C is a superset of C (that is not 100% correct, but it is a good enough statement to understand it). The file extension you use for headers is .h, and .m for the implementation.
Note that there is also Objective-C++ with the file extension .mm; I will not write about that though.

Similar to Symbian C++, Objective C uses two-phase construction: first you alloc the object on the heap, then you call one of the init methods on it. Often you can avoid having to call two methods and just use one of the static convenience methods that directly give you back a newly allocated object (e.g. stringWithCString).

Quite different (and at first very distracting) is the method calling syntax in Objective C. There are normal C functions that you call in the usual C-ish way, e.g. NSLog(@"My log message");. But there is also the Objective C syntax for methods of objects. As an example, this is for a method on obj with two parameters: [obj methodName:param1value param2:param2Value]. Looks odd, but you'll get used to it. In Objective C, this is usually called sending a message, although I find that more confusing than just calling it methods.

In this method example above, methodName is the so called selector. A selector is the identifier of a method. Sometimes you will have to identify the method (similar to a function pointer), in the example above you could do that with @selector(methodName:param2:).

While in C++ there is no root object and in Qt QObject is only used for some objects, Objective C has the mandatory root object NSObject. Contrary to Qt, where you use QObject only for classes where you want signals/slots, here you use NSObject for everything.

ARC is the automatic reference counting implemented since iOS 5. Think of it like having an implicit QSharedPointer around your objects. It makes coding feel like you have a garbage collector. Internally, ARC tells the compiler to insert retain (increment reference count) and release (decrement reference count and eventually dealloc) statements in your code. I think this is awesome, you basically can't leak objects anymore if you stick to the normal way of doing things.

Properties in Objective C are similar to Q_PROPERTY. It means that you can use the nice obj.var = foo syntax in your code while internally a [obj setVar:foo] message is sent. You can create a property with @property and have the compiler make a getter/setter for you using @synthesize. You can of course also have your own custom getter/setter with more logic inside, for example to implement lazy initialization.
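For comparison, this is roughly what the Qt side of such a property looks like (a minimal sketch with made-up names):

class Person : public QObject {
    Q_OBJECT
    Q_PROPERTY(QString name READ name WRITE setName NOTIFY nameChanged)
public:
    QString name() const { return m_name; }
    void setName(const QString &name) {
        if (name == m_name)
            return;
        m_name = name;
        emit nameChanged(name);
    }
signals:
    void nameChanged(const QString &name);
private:
    QString m_name;
};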

You can think of Objective C delegates as a set of slots. A delegate method in the delegate object is called by an object to notify that something has happened. This is very similar to Java's interfaces.

Equivalencies to the Qt classes

As important as the syntax are the associated libraries provided by iOS (and OS X). Read on to learn about the objects and functions they provide.

An NSString object is a constant string. You can also create one with the literal @"..." syntax (an 'at' sign followed by a quoted string). To have a mutable string, you have to use NSMutableString with its appendString, appendFormat etc. functions. Especially appendFormat is really useful. If you want to do replacing, stringByReplacingOccurrencesOfString is your friend and gives you back a new NSString object. For constructing a path on the file system, you can use stringByAppendingPathComponent.

Like in C++, basic types like int are not objects. If you need to wrap them inside an object, you can use NSNumber and NSValue (similar to QVariant). Also useful to know here: you can use the intValue methods of NSNumber, NSString etc. to convert to an int.

NSArray, NSSet, NSDictionary are what they sound like: a place to store NSObjects. NSArray is equivalent to a QList, NSSet to QSet and NSDictionary to a QHash/QMap. You need to use their mutable variants to change them (e.g. NSMutableArray). If you want to store primitive types, you need to use NSNumber, NSValue etc. to wrap them. For an NSArray, you can access the objects by using objectAtIndex. For an NSDictionary, you'd use valueForKey or objectForKey.

I haven't done much file IO, so I cannot write much about this here. There is NSFileManager for directory operations. You can very easily read a (smaller) file by using NSString's stringWithContentsOfFile or NSData's dataWithContentsOfFile. Remember that on OS X and iOS, one of your system levels below is POSIX, so you can also use the methods from there to get raw performance.

Speaking of NSData: it is your equivalent to QByteArray :-) For a mutable variant, have a look at NSMutableData. You can access the raw pointer via the bytes or mutableBytes methods.

I feel that most mobile applications nowadays use HTTP in some way. In Qt you would use QNetworkAccessManager for that. In Objective C, you use an NSMutableURLRequest (or NSURLRequest) inside an NSURLConnection. You need to set a delegate for the NSURLConnection. The delegate handles the asynchronous events produced while downloading (the readyRead signal is connection:didReceiveData:, the finished signal is connectionDidFinishLoading: etc.).

If you want to do socket-based IO, I can only recommend getting GCDAsyncSocket. I've tried manual socket programming for iQuassel before and it sucked for several reasons, mainly since you have to use Carbon instead of Cocoa. GCDAsyncSocket also has the nice advantage that you can easily do your network protocol parsing in a thread and avoid blocking the UI.

Speaking of threading: you can achieve basic concurrency by using NSObject's performSelectorInBackground. It makes a method run in a background thread. It can then communicate its result back to the main thread via performSelectorOnMainThread. If you want a Qt-ish 0-timer invocation, check the afterDelay: variant of performSelector, which has the selector run in an event loop iteration. More advanced things can be done with NSOperationQueue or Grand Central Dispatch.

The equivalent to a QEventLoop is a NSRunLoop. You can use that if you are processing something in another thread and need an event loop, for example for network IO.

NSUserDefaults is a nice way to store and load application settings (like you would do with QSettings).

Unfortunately there is no real equivalent to QtXmlPatterns. libxml2 exists on iOS devices, so you could use that. For simple SAX-style parsing, at least there is NSXMLParser.

UI Things

XCode has a (quite buggy) visual designer (interface builder) included. With the designer you design a storyboard (UIStoryboard) that contains a number of view controllers (UIViewController subclasses). Each view controller manages a view (remember MVC?). The navigation between the views happens via segues (UIStoryboardSegue). You can move data between view controllers inside the prepareForSegue method of the source view controller. Each view controller has methods like viewWillAppear that get called by the OS when a specific event happens.

You use "outlets" to link instances of controls (e.g. a UIButton, UILabel) with their counterpart in the interface builder UI file. Use drag and drop with CTRL.

For Qt's item views, I cannot say much about the equivalents in the Apple world. Definitely have a look at UITableView, which will use your UITableViewDataSource and UITableViewDelegate for its contents. I'd say almost all iOS applications use a table view.

If you have been using a QWebView, you can substitute it with UIWebView.

The equivalents for QImage and QPixmap are CGDataProviderRef, NSImage and CGImage.

You can do custom drawing using the Core Graphics methods. UIGraphicsBeginImageContextWithOptions creates a context on which you can use drawing functions. If you want to get a bitmap out of the context, try UIGraphicsGetImageFromCurrentImageContext.

QRect, QPoint etc have their equivalents in CGRect, CGPoint and CGSize. To help you debugging, check NSStringFromCGRect and friends.

For the widgets, note that NS* UI classes are for OS X and UI* classes are for iOS.

Misc

When developing Qt, I'm using QtCreator. My Co-Founder Olivier is a KDevelop fan, which is also supposed to be very good. For iOS development, you can use Apple's free XCode.

The Apple engineers also provide you with some possibilities to do unit testing similar to QTest. In XCode, create a new test class and use methods like STAssertEquals, STAssertTrue etc.

Want to know more?

I guess the equivalent of the Qt Developer Network is StackOverflow. Most things that have ever been attempted in iOS programming have something on StackOverflow. There is also an internal Apple developer forum available.

And of course: if you need any help in porting a Qt application to iOS.. well, that is one of the things Woboq can do for you! Just write to us.

Saving Disk Space on your Linux Server with Squashfs


We have been running our browsable code repository code.woboq.org for quite a while now. Adding more and more projects, at some point we noticed that we were getting low on disk space. In this blog post, we explain how we saved a huge amount of disk space for our static HTML files.

First some background on how we do things on the server: the subdirectories (like /linux) are mounted file system images (loop mounts). This allows easy upload from a powerful machine where we generate the HTML and reference files for the code browser. There is a huge number of small files, so using a file system image makes uploading easier and also allows us to update code.woboq.org in a more transactional way: you can just remount the image!

To improve our (lack of) disk space situation, we thought about how we could use compression. The uncompressed size of code.woboq.org was about 25 GB on ext4 file system images. A natural idea would be to switch to Btrfs images (which can do compression). However, our kernel does not support Btrfs.

The next idea was to use the power of FUSE, the file system in user space. Our kernel supports FUSE, so we didn't have to recompile and reboot in this case.

We looked at fuse-zip first, a way to mount ZIP archives as a directory. However, we found out after some time that the fuse-zip version in our Linux distro does not support ZIP64 yet. This means the huge (in terms of inode count) directories that the code browser generator can create were not supported.

Since we would have needed to compile fuse-zip ourselves (in a more current version) anyway, we thought: maybe there is an even better way than mounting a ZIP archive. After all, the ZIP format was never intended to be used as a file system.

Turns out there is a better way! We remembered that a lot of embedded devices and Linux Live CDs also need to save space. They often use Squashfs for that. So that's what we decided to use too. Our kernel does not support Squashfs so we are using the FUSE module squashfuse and so far are quite happy with it.

Generating the image (on local machine) is as simple as:

mksquashfs qt5/ qt5.img

Then we just have to upload it to the server and mount it as

squashfuse -o allow_other qt5.img ~/public_html/qt5

A size comparison of the /qt5 tree:

Original ext4 image    ~5 GB      █████████████████████████
ZIP file               ~470 MB    ██
Squashfs image         ~280 MB

Yes, that is a factor 18x compression for Squashfs!

Regarding the performance, we have not found any drawbacks yet. Possibly Squashfs is even faster since less data needs to be read from the slow hard drive, making the slowdown that the decompression must cause irrelevant.

If you want to look at the implementation of the squashfs Linux driver, you can browse it in our code browser.

Can Qt's moc be replaced by C++ reflection?


The Qt toolkit has often been criticized for extending C++ and requiring a non-standard code generator (moc) to provide introspection.
Now, the C++ standardization committee is looking at how to extend C++ with introspection and reflection. As the current maintainer of Qt's moc I thought I could write a bit about the needs of Qt, and even experiment a bit.

In this blog post, I will comment on the current proposal draft, and try to analyze what one would need to be able to get rid of moc.

If you are reading this article from the RSS or a planet, you may want to open it at its original URL to see properly formatted code.

Current draft proposal

Here is the draft proposal: N3951: C++ type reflection via variadic template expansion. It is a clever way to add compile time introspection to C++. It gives new meaning to typedef and typename such that it would work like this:

/* Given a simple class */
class SomeClass {
public:
    int foo();
    void bar(int x);
};

#if 0
/* The new typename<>... and typedef<>... 'operators': */
vector<string> names = { typename<SomeClass>... };
auto members = std::make_tuple(typedef<SomeClass>...);
#else
/* Would be expanded to something equivalent to: */
vector<string> names = { "SomeClass", "foo", "bar" };
auto members = std::make_tuple(static_cast<SomeClass*>(nullptr), &SomeClass::foo, &SomeClass::bar);
#endif

We can use that to go over the members of a class at compile time and do things like generating a QMetaObject with a normal compiler.

With the help of some more traits, that is a very good start for implementing moc features in pure C++.

The experiment

I have managed to re-implement most of the moc features such as signals and slots and properties using the proposal, without the need of moc. Of course, since the compiler obviously doesn't have support for that proposal yet, I have been manually expanding the typedef... and typename... in the prototype.

The code does a lot of template tricks to handle strings and arrays at compile time and generates a QMetaObject that is even binary compatible with the one generated by moc.

The code is available here.

About Qt and moc

Qt is a cross platform C++ toolkit specialized for developing applications with user interfaces. Qt code is purely standard C++ code, however it needs a code generator to provide introspection data: the Meta Object Compiler (moc). That little utility parses the C++ headers and generates additional C++ code that is compiled alongside the program. The generated code contains the implementations of the Qt signals, and builds the QMetaObject (which embeds string tables with the names of all methods and properties).

Historically, the first mission of the moc was to enable signals and slots using a nice syntax. It is also used for the property system. The first use of the properties was for the property editor in Qt designer, then it became used for integration with a scripting language (QtScript), and is now widely used to access C++ objects from QML.

(For an explanation of the inner working of the signals and slots, read one of my previous articles:How Qt signals and slots work.)

Generating the QMetaObject at compile time

We could ask the programmer to add a macro in the .cpp such as Q_OBJECT_IMPL(MyObject) which would be expanded to that code:

const QMetaObject MyObject::staticMetaObject = createMetaObject<MyObject>();
const QMetaObject *MyObject::metaObject() const { return &staticMetaObject; }
int MyObject::qt_metacall(QMetaObject::Call _c, int _id, void **_a) {
    return qt_metacall_impl<MyObject>(this, _c, _id, _a);
}
void MyObject::qt_static_metacall(QObject *_o, QMetaObject::Call _c, int _id, void **_a) {
    qt_static_metacall_impl<MyObject>(_o, _c, _id, _a);
}

The implementation of createMetaObject uses the reflection capabilities to find out all the slots, signals and properties in order to build the metaobject at compile time. The functions qt_metacall_impl and qt_static_metacall_impl are generic implementations that use the same data to call the right function. Click on the function names if you are interested in the implementation.

Annotating signals and slots

We could perhaps use C++11 attributes for that. In that case, it would be convenient if attributes could be placed next to the access specifiers. (There is already a proposal to add group access specifiers, but it does not cover the attributes.)

class MyObject : public QObject {
    Q_OBJECT
public [[qt::slot]]:
    void fooBar();
    void otherSlot(int);
public [[qt::signal]]:
    void mySignal(int param);
public:
    enum [[qt::enum]] Foobar { Value1, Value2 };
};

Then we would need compile time traits such as has_attribute<&MyObject::myFunction>("qt::signal")

Function traits

I just mentioned has_attribute. Another trait will be needed to determine if the function is public, protected or private.
The proposal also mentioned we could use typename<&MyObject::myFunction>... to get the parameter names. We indeed need them as they are used when you connect to a signal in QML to access the parameters.
And currently we are able to call a function without specifying all the parameters if there are default parameters. So we need to know the default parameters at compile time to create them at run time.

However, there is a problem with functions traits in that form: non-type template parameters of function type need to be function literals. (See this stackoverflow question.) Best explained with this code:

struct Obj { void func(); };
template<void (Obj::*)()> struct Trait {};
int main() {
    Trait<&Obj::func> t1;  // Ok. The function is directly written
    constexpr auto var = &Obj::func;
    Trait<var> t2;  //Error:  var is not a function directly written.
}

But as we are introspecting, we get, at best, the functions in constexpr form. So this restriction would need to be removed.

The properties

We have not yet solved the Q_PROPERTY feature.

I'm afraid we will have to introduce a new macro because it is most likely not possible to keep the source compatibility with Q_PROPERTY. A way to do it would be to add static constexpr members of a recognizable type. For example, this is my prototype implementation:

template<typename Type, typename... T> struct QProperty : std::tuple<T...> {
    using std::tuple<T...>::tuple;
    using PropertyType = Type;
};
template<typename Type, typename... T> constexpr auto qt_makeProperty(T&& ...t)
{ return QProperty<Type, typename std::decay<T>::type...>{ std::forward<T>(t)... }; }

#define Q_PROPERTY2(TYPE, NAME, ...) static constexpr auto qt_property_##NAME = \
    qt_makeProperty<TYPE>(__VA_ARGS__);

To be used like this

Q_PROPERTY2(int, foo, &MyObject::getFoo, &MyObject::setFoo)

We can find the properties by looking for the QProperty<...> members and removing the "qt_property_" part of the name. Then all the information about the getter, setter and the other attributes is available.

And if we want to keep the old Q_PROPERTY?

I was wondering whether it would be possible to keep source compatibility even when using the same macro. I almost managed:

template<typename... Fs> struct QPropertyHolder { template<Fs... Types> struct Property {}; };
template<typename... Fs> QPropertyHolder<Fs...> qPropertyGenerator(Fs...);
#define WRITE , &ThisType::
#define READ , &ThisType::
#define NOTIFY , &ThisType::
#define MEMBER , &ThisType::
#define Q_PROPERTY(A) Q_PROPERTY_IMPL(A) /* expands the WRITE and READ macro */
#define Q_PROPERTY_IMPL(Prop, ...) static void qt_property_ ## __COUNTER__(\
    Prop, decltype(qPropertyGenerator(__VA_ARGS__))::Property<__VA_ARGS__>) = delete;

class MyPropObject : public QObject {
    Q_OBJECT
    typedef MyPropObject ThisType; // FIXME: how to do that automatically
                                   //        from within the Q_OBJECT macro?
signals: // would expand to public [[qt::signal]]:
    void fooChanged();
public:
    QString foo() const;
    void setFoo(const QString&);

    Q_PROPERTY(QString foo READ foo WRITE setFoo NOTIFY fooChanged)
};

This basically creates a function with two arguments. The name of the first argument is the name of the property, which we can get via reflection. Its type is the type of the property. The second argument is of the type QPropertyHolder<...>::Property<...>, which contains pointers to the member functions for the different attributes of the property. Introspection would allow us to dig into this type.
But the problem here is that it needs the typedef ThisType. It would be nice if there were something like decltype(*this) that worked in the class scope without any members; then we could put this typedef within the Q_OBJECT macro.

Re-implementing the signals

This is going to be the big problem as I have no idea how to possibly do that. We need, for each signal, to generate its code. Something that could look like this made-up syntax:

int signalId = 0;
/* Somehow loop over all the signals to implement them (made-up syntax) */
for (auto signal : { typedef<MyObject requires has_attribute("qt::signal")>... }) {
    signalId++;
    signal(auto... arguments) = { 
        SignalImplementation<decltype(signal), signalId>::impl(this, arguments...); 
    }
}

The implementation of SignalImplementation::impl is then easy.

Summary: What would we need

In summary, this is what would be needed in the standard to implement Qt-like features without the need of moc:

  • The N3951 proposal: C++ type reflection via variadic template expansion would be a really good start.
  • Allow attributes within the access specifier (public [[qt::slot]]:)
  • Traits to get the attributes (e.g. constexpr std::has_attribute<&MyClass::mySignal>("qt::signal"))
  • Traits to get the access of a function (public, private, protected) (for QMetaMethod::access)
  • A way to declare functions.
  • Getting default value of arguments.
  • Accessing function traits via constexpr expression.
  • Listing the constructors. (for Q_INVOKABLE constructors.)

What would then still be missing

  • Q_PLUGIN_METADATA, which allows loading a JSON file and putting the information in the binary:
    I'm afraid we will still need a tool for that. (Because I hardly see the C++ compiler opening a file and parsing JSON.) This does not really belong in moc anyway and is only there because moc already existed.
  • Whatever else I missed or forgot. :-)

Conclusion: will moc finally disappear?

Until Qt6, we have to maintain source and binary compatibility. Therefore moc is not going to disappear, but may very well be optional for new classes. We could have a Q_OBJECT2 which does not need moc, but would use only standard C++.

In general, while it would be nice to avoid the moc, there is also no hurry to get rid of it. It is generally working fine and serving its purpose quite well. A pure template C++ implementation is not necessarily easier to maintain. Template meta-programming should not be abused too much.

For a related experiment, have a look at my attempt to reimplement moc using libclang

Solving the Unavoidable Race


This is the story of how I have (not) solved a race condition that impacts QWaitCondition and is also present in every other condition variable implementation (pthread, boost, std::condition_variable).

bool QWaitCondition::wait(int timeout) is supposed to return true if the condition variable was met and false if it timed out. The race is that it may return false (for timeout) even if it was actually woken up.

The problem was already reported in 2012. But I only came to look at it when David Faure was trying to fix another bug in QThreadPool that was caused by this race.

The problem in QThreadPool

When starting a task, QThreadPool did something along the lines of:

QMutexLocker locker(&mutex);
taskQueue.append(task); // Place the task on the task queue
if (waitingThreads > 0) {
    // There are already idle threads running. They are waiting on the
    // 'runnableReady' QWaitCondition. Wake one of them up.
    waitingThreads--;
    runnableReady.wakeOne();
} else if (runningThreadCount < maxThreadCount) {
    startNewThread(task);
}

And the thread's main loop looks like this:

void QThreadPoolThread::run()
{
    QMutexLocker locker(&manager->mutex);
    while (true) {
        /* ... */
        if (manager->taskQueue.isEmpty()) {
            // no pending task, wait for one.
            bool expired = !manager->runnableReady.wait(locker.mutex(), manager->expiryTimeout);
            if (expired) {
                manager->runningThreadCount--;
                return;
            } else {
                continue;
            }
        }
        QRunnable *r = manager->taskQueue.takeFirst();
        // run the task
        locker.unlock();
        r->run();
        locker.relock();
    }
}

The idea is that the thread will wait a given amount of time for a task; if no task is added within that time, the thread expires and is terminated. The problem here is that we rely on the return value of runnableReady.wait(). If a task is scheduled at exactly the same time as the thread expires, the thread will see false and will terminate, but the main thread will not start any other thread. That might leave the application hanging as the task will never be run.

The Race

Many of the implementations of a condition variable have the same issue.
It is even documented in the POSIX documentation:

[W]hen pthread_cond_timedwait() returns with the timeout error, the associated predicate may be true due to an unavoidable race between the expiration of the timeout and the predicate state change.

The pthread documentation describes it as an unavoidable race. But is it really? The wait condition is associated with a mutex, which is locked by the user when calling wake() and which is also passed, locked, to wait(). The implementation is supposed to unlock and wait atomically.

The C++11 standard library's condition_variable even has an enum (cv_status) for the return code. The C++ standard does not document the race, but all the implementations I have tried suffer from it. (None of them is therefore conforming.)
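For reference, here is a minimal sketch of the same timed wait expressed with std::condition_variable; the condition variable, mutex and timeout are passed in as assumed parameters:

#include <condition_variable>
#include <mutex>
#include <chrono>

bool waitWithTimeout(std::condition_variable &cv, std::mutex &m, int timeoutMs)
{
    std::unique_lock<std::mutex> lock(m);
    // wait_for() reports cv_status::timeout on expiry, but the same race applies:
    // the timeout and a notify can coincide, and the wake-up is then reported as a timeout.
    return cv.wait_for(lock, std::chrono::milliseconds(timeoutMs))
            == std::cv_status::no_timeout;
}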

Let me try to explain the race better: this code shows a typical use of QWaitCondition.

Thread 1 (waker):
  mutex.lock();
  if (!ready) {
      ready = true;
      condition.wakeOne();
  }
  mutex.unlock();

Thread 2 (waiter):
  mutex.lock();
  ready = false;
  bool success = condition.wait(&mutex, timeout);
  assert(success == ready);
  mutex.unlock();

The race is that the wait condition in Thread 2 times out and returns false while, at the same time, Thread 1 wakes the condition. One could expect that since everything is protected by a mutex, this should not happen. Internally, the wait condition unlocks its internal mutex, but it does not check that it has not been woken up once the user mutex is locked again.

QWaitCondition has internal state that counts the number of threads currently waiting on it and the number of pending wake-ups that have not been consumed yet.
Let's review the actual code of QWaitCondition (edited for readability):

bool QWaitCondition::wait(QMutex *mutex, unsigned long time)
{
    // [...]
    pthread_mutex_lock(&d->mutex);
    ++d->waiters;
    mutex->unlock();
    // (simplified for brevity)
    int code = 0;
    do {
        code = d->wait_relative(time); // calls pthread_cond_timedwait
    } while (code == 0 && d->wakeups == 0);
    --d->waiters;
    if (code == 0)
        --d->wakeups; // [!!]
    pthread_mutex_unlock(&d->mutex);
    mutex->lock();
    return code == 0;
}

void QWaitCondition::wakeOne()
{
    pthread_mutex_lock(&d->mutex);
    d->wakeups = qMin(d->wakeups + 1, d->waiters);
    pthread_cond_signal(&d->cond);
    pthread_mutex_unlock(&d->mutex);
}

Notice that d->mutex is a native pthread mutex, while the mutex argument is the user's mutex. In the line marked with [!!] we effectively take the right to wake up. But we do that before locking the user's mutex. What if we checked the wakeups again under the user's lock?

Attempt 1: check again under the user's lock

bool QWaitCondition::wait(QMutex *mutex, unsigned long time)
{
    // Same as before:
    pthread_mutex_lock(&d->mutex);
    ++d->waiters;
    mutex->unlock();
    int code = 0;
    do {
        code = d->wait_relative(time); // calls pthread_cond_timedwait
    } while (code == 0 && d->wakeups == 0);
    // --d->waiters; // Moved below
    if (code == 0)
        --d->wakeups;
    pthread_mutex_unlock(&d->mutex);
    mutex->lock();

    // Now check the wakeups again:
    pthread_mutex_lock(&d->mutex);
    --d->waiters;
    if (code != 0 && d->wakeups) {
        // The race is detected, and corrected
        --d->wakeups;
        code = 0;
    }
    pthread_mutex_unlock(&d->mutex);
    return code == 0;
}

And there we have fixed the race! We just had to lock the internal mutex again because d->waiters and d->wakeups need to be protected by it. We needed to unlock it because locking the user's mutex with the internal mutex locked would potentially cause deadlock as lock order would not be respected.

However, we have now introduced another problem: with three threads, a wake-up may be claimed by the wrong waiter.

//    Thread 1              // Thread 2             // Thread 3
mutex->lock()
cond->wait(mutex);
                            mutex->lock()
                            cond->wake();
                            mutex->unlock()
                                                    mutex->lock()
                                                    cond->wait(mutex, 0);

We don't want Thread 3 to steal the signal from Thread 1. But that can happen if Thread 1 sleeps a bit too long and does not manage to lock the internal mutex in time before Thread 3 expires.

The only way to solve this problem would be if we could order the threads by the time they started to wait.
Inspired by bitcoin's blockchain, I created a linked list of nodes on the threads' stacks that represents that order. When a thread starts to wait, it adds itself at the end of the doubly linked list. When a thread wakes another thread, it marks the last node of the linked list (by incrementing a woken counter inside the node). When a thread times out, it checks whether it, or any other thread after it in the linked list, was marked. Only in that case do we resolve the race; otherwise we consider it a timeout.
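A rough C++ sketch of that idea (this is not the actual patch; the names are illustrative):

// Each waiting thread places one of these on its own stack.
struct WaiterNode {
    WaiterNode *prev = nullptr; // the thread that started waiting just before this one
    int woken = 0;              // incremented by wakeOne() on the most recently added node
};
// wakeOne() marks the node at the tail of the list. A waiter that times out walks
// from its own node towards the tail: if any of those nodes carries a pending
// wake-up, the timeout is converted into a successful wake-up instead.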

You can see the patch on the code review tool.

Performance

This patch adds quite a bit of code to add and remove nodes in the linked list, and also to walk the list to check whether we were indeed woken up. The linked list is bounded by the number of waiting threads. I was expecting this linked list handling to be negligible compared to the other costs of QWaitCondition.

However, the results of the QWaitCondition benchmark show that, with 10 threads and high contention, we have a ~10% penalty. With 5 threads there is ~5% penalty.

Is it worth it to pay this penalty to solve the race? So far, we decided not to merge the patch and keep the race.

Conclusion

Fixing the race is possible, but it has a small performance impact. None of the implementations attempts to fix the race. I wonder why there is a returned status at all if you cannot rely on it.

C++14 for Qt programmers


C++14 is the name of the version of the standard to be released this year. While C++11 brought many more features that took time to be implemented by the compilers, C++14 is a much lighter change that is already implemented by compilers such as clang or gcc.

Qt 5 was already adapted in many ways so you can make use of the new features of C++11. You can read about that in my previous article, C++11 in Qt5. This article mentions some of the changes in C++14 and their impact on Qt users.

Generic lambda

C++11 introduced lambda functions, and Qt5 allows you to connect signals to them with the new connect syntax. C++14 simplifies the use of lambda functions, as the argument types can be automatically deduced. You can use auto as the parameter type instead of explicitly writing the type.

 connect(sender, &Sender::valueChanged, [=](const auto &newValue) {
        receiver->updateValue("senderValue", newValue);
    });

Internally, a lambda function is just a functor object with an operator(). With generic lambdas, that operator is now a templated function. I had to make a change, which was already included in Qt 5.1, to support such functors.
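Roughly speaking, the compiler turns a generic lambda into an unnamed functor whose call operator is a template; a minimal sketch (the struct name, member and body are placeholders):

// [](const auto &newValue) { /* body */ } behaves roughly like:
struct GeneratedClosure {
    int captured;                        // stands in for whatever the lambda captured
    template <typename T>
    void operator()(const T &newValue) const {
        // the lambda body would go here, using 'captured' and 'newValue'
        (void)newValue;
    }
};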

C++14 also adds the possibility to have expressions in the capture.

 connect(sender, &Sender::valueChanged, [receiver=getReceiver()](const auto &newValue) {
        receiver->updateValue("senderValue", newValue);
    });

Relaxed Constant expressions

C++11 came with the new constexpr keyword. Qt 4.8 added a new macro Q_DECL_CONSTEXPR that expands to constexpr when supported, and we have been using it for many functions in Qt 5 when possible.

C++14 relaxes the rules of what is allowed in a constexpr function. The C++11 rules only allowed a single return statement, and constexpr could only be applied to const member functions. C++14 allows pretty much any code that can be evaluated at compile time.

/* This function was not valid in C++11 because it is composed of several statements,
 * it has a loop, and a local variable. It is now allowed in C++14 */
constexpr int myFunction(int v) {
  int x = 1;
  while (x < v*v)
    x*=2;
  return x;
}

Member functions declared as constexpr in C++11 were automatically considered const. This is no longer the case, as non-const functions can also be constexpr.
The result of this change is that constexpr member functions that were not explicitly marked as const change const-ness in C++14, and this is a binary incompatible change. Fortunately, in Qt all Q_DECL_CONSTEXPR member functions were also explicitly declared as const to keep binary compatibility with non-C++11 code.

So now we can start annotating non-const functions such as the operator= of many classes. For this reason, Qt 5.5 will come with a new macro, Q_DECL_RELAXED_CONSTEXPR, which expands to constexpr when the compiler is in C++14 mode. We will then be able to start annotating relevant functions with Q_DECL_RELAXED_CONSTEXPR.
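A minimal sketch of what such an annotation could look like, assuming a Qt version that already defines the macro (Qt 5.5 as mentioned above); the Point class is made up for illustration:

#include <QtGlobal>

class Point {
    int m_x = 0;
public:
    Q_DECL_CONSTEXPR int x() const { return m_x; }   // C++11-style constexpr, stays const
    Q_DECL_RELAXED_CONSTEXPR Point &setX(int x)      // mutating, constexpr only in C++14 mode
    { m_x = x; return *this; }
};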

Small features

C++14 also comes with a lot of small convenience features that do not have a direct impact on Qt, but that you can use in your program if you enable C++14. We just made sure that tools like moc can handle them.

Group Separators in Numbers

If you are writing huge constants in your code, you can now use ' as a group separator:

    int i = 123'456'789;

Binary literal

In C++ you can write your number in decimal, octal (starting your number with 0), hexadecimal (starting with 0x). You can now also write in binary by using the 0b prefix.

    int i = 0b0001'0000'0001;

Automatic return type detection

If you have an inline function, you can use auto as the return type, and you no longer need to specify it. The compiler will deduce it for you.

// return type auto detected to be 'int'
auto sum(int a, int b) { return a+b; }

This is, however, not supported for slots or invokable methods, as moc would not be able to detect the return type.

Variable template

You could already have function templates or class templates. Now you can also have variable templates.

template<typename T> const T pi = 3.141592653589793;
/*...*/
    float f = pi<float>;
    double d = pi<double>;

Uniform initialization of structures with non static data member initializers

In C++11, you can use uniform initialization to initialize a struct that has no constructor by initializing all the members. C++11 also added the possibility to have inline non-static data member initializers directly in the class declaration. But you could not use the two at the same time. In C++14, you can. This code works and does what you would expect:

struct MyStruct {
    int x;
    QString str;
    bool flag = false;
    QByteArray str2 = "something";
};

    // ...
    // did not compile in C++11 because MyStruct was not an "aggregate" 
    MyStruct s = { 12, "1234", true };
    Q_ASSERT(s.str2 == "something");

Reference Qualifiers

This is not a C++14 feature, but a C++11 change. But we only started to make use of this late in the Qt5 cycle and I did not mention it in a previous blog post so I'll mention it here.

Consider this code:

    QString lower = QString::fromUtf8(data).toLower();

fromUtf8 returns a temporary. It would be nice if the toLower could re-use the memory allocated by the string and do the transformation in place. Well that's what the reference qualifiers for member functions are for.

(code simplified from qstring.h:)

class QString {
public:
    /* ... */

    QString toLower() const &
    { /* ... returns a copy with lower case character ... */ }
    QString toLower() &&
    { /* ... do the conversion in-place ... */ }
    /* ... */
};

Notice the '&' and '&&' at the end of toLower. Those are reference qualifiers and let you overload a function depending on the reference type of the 'this' object, just like the const qualifier lets you overload on the constness of this. When toLower is called on a temporary (an rvalue reference), the second overload (the one with &&) is chosen and the transformation is done in place.

The functions that benefit from this optimisation in Qt 5.4 are: QString::toUpper, QString::toLower, QString::toCaseFolded, QString::toLatin1, QString::toLocal8Bit, QString::toUtf8, QByteArray::toUpper, QByteArray::toLower, QImage::convertToFormat, QImage::mirrored, QImage::rgbSwapped, QVersionNumber::normalized, QVersionNumber::segment

Changes in the standard library.

C++11 and C++14 have added a lot of features to the standard library, competing with many of the features of QtCore. However, Qt makes little use of the standard library. In particular, we do not want to have the standard library as part of the ABI. This allows Qt to stay binary compatible even when the standard library changes (for example libstdc++ vs. libc++). Also, Qt still supports older platforms that do not have the C++11 standard library. This really limits our use of it.

Yet, Qt5 deprecated its own algorithms library and now recommends using the algorithms from the STL (for example, std::sort instead of qSort).

Conclusion

It may still take some time before you can use those features in your project. But I hope that, by now, you have started using C++11 features like many other projects did (Qt Creator, KDE, LLVM).

MSVC will enable C++14 by default with their new compilers, but clang and gcc require a special compilation flag (currently -std=c++1y). With qmake, you can build your project with C++14 since Qt 5.4 by using this option:

CONFIG += c++14

Nicer debug output using QT_MESSAGE_PATTERN


If you are using Qt, you might have some qDebug or qWarning statements in your code. But did you know that you can greatly improve the output of those with the QT_MESSAGE_PATTERN environment variable? This blog post will give you some hints and examples of what you can do.

The default message pattern just prints the message (and the category if one was specified), but qDebug has the possibility to output more information. You can display cool things like the line of code, the function name or more by using some placeholders in the pattern.

QT_MESSAGE_PATTERN="%{message}"

Some examples of placeholders (a combined pattern is shown right after this list):

  • %{file} and %{line} are the location of the qDebug statement (file and line number)
  • %{function} just shows the function name. Contrary to Q_FUNC_INFO, which is really the raw function signature, this shows a short, prettier version of the function name without the arguments or other not-so-useful decorators
  • %{time [format]} shows the time at which the debug statement is emitted. Using the format you can show the time since process startup, or an absolute time, with or without the date. Having the milliseconds in the debug output is helpful to get timing information about your code
  • %{threadid}, %{pid}, %{appname} are useful if the logs of several applications are mixed, or to find out from which thread something is run.
  • And you can find even more placeholders in the documentation.
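For instance, a pattern combining a few of the placeholders above could look like this (the particular selection is just an example):

export QT_MESSAGE_PATTERN="[%{time h:mm:ss.zzz}] %{file}:%{line} %{function} - %{message}"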

Colorize it!

In order to make the output much prettier and easier to read, you can add some color by means of terminal escape sequences.

Putting the escape sequences in an environment variable can be a bit tricky. The trick I use, which works with bash or zsh, is to run echo -e inside backquotes.

export QT_MESSAGE_PATTERN="`echo -e "\033[34m%{function}\033[0m: %{message}"`"

That example will print the function name in blue, and then the message in the normal color.

Conditions

KDE's kDebug has colored debug output support since KDE 4.0 (it is enabled by setting the KDE_COLOR_DEBUG environment variable). It printed the function name in blue for normal debug messages, and in red for warnings or critical messages. I wanted the same in Qt, so some placeholders were added to have an output that depends on the type of message.

The content of what is between %{if-debug} and %{endif} will only be used for qDebug statements but not for qWarning. Similarly, we have %{if-warning} and %{if-critical}. There is also %{if-category} that will only be displayed if there is a category associated with this message.

Backtrace (linux-only)

On Linux, it is possible to show a short backtrace for every debug output.

Use the %{backtrace} placeholder, which can be configured to show more or less call frames.

In order for Qt to be able to determine the backtrace, it needs to find the symbol names from the symbol table. By default, this is only going to display exported functions within a library. But you can tell the linker to include this information for every function. So if you wish to use this feature, you need to link your code with the -rdynamic option.

Add this in your .pro file if you are using qmake:

QMAKE_LFLAGS += -rdynamic

Remember while reading this backtrace that symbols might be optimized away by the compiler. That is the case for inline functions, or functions with the tail-call optimization
See man backtrace.

Examples of patterns

And now, here are a few ready to use patterns that you can put in your /etc/profile, ~/.bashrc, ~/.zshrc or wherever you store your shell configuration.

KDE4 style:
export QT_MESSAGE_PATTERN="`echo -e "%{appname}(%{pid})/(%{category}) \033\[31m%{if-debug}\033\[34m%{endif}%{function}\033\[0m: %{message}"`"

Time in green; blue function name for debug; red 3-frame backtrace for warnings. Category in yellow if present:
export QT_MESSAGE_PATTERN="`echo -e "\033[32m%{time h:mm:ss.zzz}%{if-category}\033[32m %{category}:%{endif} %{if-debug}\033[34m%{function}%{endif}%{if-warning}\033[31m%{backtrace depth=3}%{endif}%{if-critical}\033[31m%{backtrace depth=3}%{endif}%{if-fatal}\033[31m%{backtrace depth=3}%{endif}\033[0m %{message}"`"

Note that since Qt 5.4, the information about the function name or the file location is only available if your code is compiled in debug mode or if you define QT_MESSAGELOGCONTEXT in your compiler flags. For this reason %{backtrace depth=1} might be more accurate than %{function}.

Don't hesitate to post your own favorite pattern in the comments.

Final words

The logging system has become quite powerful in Qt5. You can have categories and hooks. I invite you to read the documentation for more information about the debugging option that are at your disposal while using Qt.

QMetaType knows your types


QMetaType is Qt's way to have run-time dynamic information about your types. It enables things such as QVariant wrapping of custom types, copy of queued connection arguments, and more.

If you ever wondered what Q_DECLARE_METATYPE or qRegisterMetaType do and when to use (or not to use) them, read on. This article will describe what you need to know about QMetaType: what its purpose is, how to use it, and how it works.

Why does Qt need runtime dynamic type information?

Let's start with a bit of history. QMetaType was introduced in Qt 4.0. It was created to make asynchronous signals and slots (Qt::QueuedConnection) possible. For queued slots to work, we have to copy the arguments and store them in an event that will be processed later. We also need to delete those copies when we are finished invoking the slot. (Note: this is not needed when using Qt::DirectConnection: pointers to arguments directly on the stack are used.)

The code dispatching signals in QMetaObject::activate has an array of pointers to arguments (void*). (For more info, read how signals and slots work.) But, at the time, all Qt knew about the argument types was their names as strings, extracted by moc.

QMetaType provides a way, given the type name as a string (e.g. "QPoint"), to copy or destroy an object of that type. Qt would then use void *QMetaType::create(int type, void *copy) and QMetaType::destroy(int type, void *data) to copy and destroy the arguments, where the int type is obtained with QMetaType::type(const char *typeName) from the type name of the argument, as provided by moc. QMetaType also provides a way for the developer to register any kind of type in the meta type database.
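To illustrate that API, here is a minimal sketch that copies and destroys a QPoint purely through its type name (error handling omitted, the function name is made up):

#include <QMetaType>
#include <QPoint>

void copyByName()
{
    const int typeId = QMetaType::type("QPoint");       // look up the meta-type id from the name
    QPoint original(1, 2);
    void *copy = QMetaType::create(typeId, &original);  // heap-allocated copy of 'original'
    QMetaType::destroy(typeId, copy);                   // destroy and free the copy
}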

Another use case for QMetaType is QVariant. The QVariant from Qt 3.x only supported built-in Qt types because a contained arbitrary type would also need to be copied or destroyed together with the wrapping QVariant. But with the help of QMetaType, QVariant was able to contain any registered type since QVariant can now copy and destroy contained instances of objects.

What information does QMetaType keep?

Since Qt 4.0 a lot has changed. We now have QtScript and QML which are making intensive use of the dynamic type integration. And we had to optimize a lot.

Here is the list of information kept for each type in the meta-type system:

  • The Type Name as registered. There is a name index for fast lookup of the meta type id. Since Qt 4.7, it is even possible to register the same type with different names (useful for typedefs).
  • (Copy) Constructor and Destructor (in-place or not).
  • Size to know how much space to allocate for a stack or inline member construction.
  • Flags specifying the same information as QTypeInfo (see below) or the type of conversion.
  • Custom conversion functions, set by QMetaType::registerConverter.
  • QMetaObject, containing the meta QObject data associated with a type if it exists.

QTypeInfo

QTypeInfo is a trait class orthogonal to QMetaType; it allows the developer to manually specify (using Q_DECLARE_TYPEINFO) that a type is movable (using memmove) or whether its constructor/destructor need to be run. This is mainly used for optimization in containers like QVector.

For example, implicitly shared classes may be moved with memmove, while a normal copy would first increase the reference count in the copy constructor and later decrease it in the destructor.
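A minimal sketch of how such a declaration looks (Vec2 is a made-up type):

#include <QTypeInfo>

struct Vec2 { double x, y; };
// Plain data: containers may memmove it and skip constructor/destructor calls.
Q_DECLARE_TYPEINFO(Vec2, Q_PRIMITIVE_TYPE);
// An implicitly shared class would use Q_MOVABLE_TYPE instead:
// Q_DECLARE_TYPEINFO(MyString, Q_MOVABLE_TYPE);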

C++11 introduces move constructors and standard type traits to solve this problem, but since QTypeInfo was designed way before C++11 and Qt still has to work with older compilers, we have to do without.

How does it work?

For historical reasons, there is a big difference between built-in types and custom types. For built-in types in QtCore, each meta-type function is basically a switch that has special code for each type. In Qt 5.0 this was re-factored to use templates a lot. (See QMetaTypeSwitcher.) But what interests us in this article is how it works for custom registered types.

There is simply a QVector<QCustomTypeInfo> that holds all the information and a bunch of function pointers.

The Q_DECLARE_METATYPE macro.

That macro specializes the template class QMetaTypeId for the specific type. (In fact, it actually specializes the class QMetaTypeId2 and most of the code uses QMetaTypeId2. I don't know the exact reason behind QMetaTypeId2. Maybe so that Qt can add more built-in types without breaking code that used Q_DECLARE_METATYPE before.)

QMetaTypeId is used to determine the meta-type id at compile time for a type.
QMetaTypeId::qt_metatype_id is the function called by qMetaTypeId<T>(). On the first call of this function, it will call some internal function within QMetaType to register and allocate a meta-type id for this type, using the name specified in the macro. It will then store that id in a static variable.

Apart from the name, all other information is automatically inferred by the compiler using templates.
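As a reminder of how the macro is used in practice (the Message struct is made up for illustration):

#include <QMetaType>
#include <QString>
#include <QVariant>

struct Message {
    int id;
    QString text;
};
Q_DECLARE_METATYPE(Message)   // at global scope, after the type definition

// Now the type can be wrapped in a QVariant:
// QVariant v = QVariant::fromValue(Message{42, "hello"});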

qRegisterMetaType

Types registered using Q_DECLARE_METATYPE are actually registered (and assigned an id) on the first use of qMetaTypeId(). That is the case when a type is wrapped in a QVariant, for example. But it has not happened yet when connecting signals and slots. In that case you need to force the first use with qRegisterMetaType.
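Building on the hypothetical Message type from the sketch above, the registration for queued connections would typically be done once, early in the program:

// e.g. at the start of main(), before any queued connection delivers a Message:
qRegisterMetaType<Message>();            // uses the name given to Q_DECLARE_METATYPE
qRegisterMetaType<Message>("Message");   // or register explicitly under a given name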

Automatic registration.

Developers often forget to register their meta-type until they see the compilation error or run-time error telling them to do so. But wouldn't it be nice if that were not necessary? The only reason Q_DECLARE_METATYPE is needed is to get the name. But there are cases where we can know the name at run time without that macro. For example, for QList<T>: if T is already registered, we can query the meta-type system and construct the name using "QList<" + QMetaType::name(qMetaTypeId<T>()) + ">".
We do that for a bunch of templated classes, for example: QList, QVector, QSharedPointer, QPointer, QMap, QHash, ...
We can also determine the name of a pointer to a QObject subclass thanks to the information provided by moc: T::staticMetaObject.className() + "*"
And from Qt 5.5 on, Q_GADGETs and Q_ENUMs are also automatically declared.

That was it for Q_DECLARE_METATYPE, but you would still need to call qRegisterMetaType to use these types in a Q_PROPERTY or as a parameter in a signal/slot queued connection. Since Qt 5.x however, the code generated by moc will call qRegisterMetaType for you if moc can determine that the type may be registered as a meta-type.

Research

Before Qt 5.0, I was investigating whether we could get rid of Q_DECLARE_METATYPE for the cases in which we do not need the name. It worked somewhat like this:

template<typename T> struct QMetaTypeId {
    static int qt_metatype_id() {
        static int typeId = QMetaType::registerMetaType(/*...*/);
        return typeId;
    }
};

According to the C++ standard, there shall be exactly one instance of the variable QMetaTypeId<T>::qt_metatype_id()::typeId for each type T. But in practice some compilers or linkers do not obey this rule. In particular, on Windows, there would be one instance per library even when using the proper export macro. We would therefore always need a name identifier, which we don't have. (And we don't want to rely on RTTI.) Therefore, in Qt 5, we only register the types for which we can know the name.


Smooth animations using the QtQuick Canvas


Google's Material Design showcases a few nicely detailed animations that add life to the user interface. QML makes it straightforward to create the traditional moving, scaling and opacity change animations while taking advantage of the GPU, but how can we create an animation changing the shape of an element and not merely transforming it?

Today we'll see how we can use the QML Canvas item to create an animated simplified version of the Android's drawer and back arrow button.

We'll make sure that we use the GPU to accelerate the rendering and use standard QtQuick animations to control the drawing evolution, as conveniently as with traditional transform animations.

Since you can animate any property in QML, not only the built-in ones, you can define the animation parameters declaratively as properties and then use those as input for your JavaScript Canvas drawing code, requesting a repaint each time an input changes.

Drawing the base

Let's start with a static rendering of our drawer icon, drawing three horizontal bars:

import QtQuick 2.0
Canvas {
    id: canvas
    width: 256
    height: 256

    onPaint: {
        var ctx = getContext('2d')
        ctx.fillStyle = 'white'
        ctx.fillRect(0, 0, width, height)

        var left = width * 0.25
        var right = width * 0.75
        var vCenter = height * 0.5
        var vDelta = height / 6

        ctx.lineCap = "square"
        ctx.lineWidth = vDelta * 0.4
        ctx.strokeStyle = 'black'

        ctx.beginPath()
        ctx.moveTo(left, vCenter - vDelta)
        ctx.lineTo(right, vCenter - vDelta)
        ctx.moveTo(left, vCenter)
        ctx.lineTo(right, vCenter)
        ctx.moveTo(left, vCenter + vDelta)
        ctx.lineTo(right, vCenter + vDelta)
        ctx.stroke()
    }
}

Which gives us:

Using QML properties to drive the animation

Then let's add the animation logic for the rotation. Use a State, triggered when the arrowFormState boolean property is true, make our whole drawing rotate by 180 degrees in that state and specify how we want it to be animated:

    property bool arrowFormState: false
    function toggle() { arrowFormState = !arrowFormState }

    property real angle: 0
    states: State {
        when: arrowFormState
        PropertyChanges { angle: Math.PI; target: canvas }
    }
    transitions: Transition {
        NumberAnimation {
            property: "angle"
            easing.type: Easing.InOutCubic
            duration: 500
        }
    }

Each time one of our animated values change, tell the canvas to paint itself:

    onAngleChanged: requestPaint()

The Canvas uses the software rasterizer by default; this permits using functions like getImageData() without problems (that function is slow if the pixels are lying in graphics memory). In our case, however, we prefer having our drawing rendered as fast as possible to allow a smooth animation. Use the FramebufferObject renderTarget to use the OpenGL paint engine and the Cooperative renderStrategy to make sure that OpenGL calls are made in the QtQuick render thread:

    renderTarget: Canvas.FramebufferObject
    renderStrategy: Canvas.Cooperative

Finally simply use the animated value of our Canvas' angle QML property in our JavaScript drawing code:

    onPaint: {
        var ctx = getContext('2d')
        // The context keeps its state between paint calls, reset the transform
        ctx.resetTransform()

        // ...

        // Rotate from the center
        ctx.translate(width / 2, height / 2)
        ctx.rotate(angle)
        ctx.translate(-width / 2, -height / 2)

        // ...
    }

In practice we'll react to input events from a MouseArea, but for the sake of keeping the code simple in this demo we use a Timer to trigger a state change:

    Timer { repeat: true; running: true; onTriggered: toggle() }

And this is what we get:

Taking advantage of existing animations

Pretty, although it would be nicer if the rotation were always clockwise. This is possible to do with a NumberAnimation, but QtQuick already provides this functionality in RotationAnimation; we can just tell it to update our custom angle property instead. Since QtQuick uses degrees, except for the Canvas API which requires radians, we'll convert to radians in our paint code:

    states: State {
        when: arrowFormState
        PropertyChanges { angle: 180; target: root }
    }
    transitions: Transition {
        RotationAnimation {
            property: "angle"
            direction: RotationAnimation.Clockwise
            easing.type: Easing.InOutCubic
            duration: 500
        }
    }
    onPaint: {
        // ...
        ctx.rotate(angle * Math.PI / 180)
        // ...
    }

This time it rotates clockwise for both transitions:

Change the shape based on animation parameters

Lastly we'll add the morphing logic. Create a new morphProgress property that we'll animate from 0.0 to 1.0 between the states, derive intermediate drawing local variables from that value and finally use them to animate the position of the line extremities between state changes. We could use a separate property for each animated parameter and let Qt animations do the interpolation, but this would spread the drawing logic around a bit more:

    property real morphProgress: 0
    states: State {
        // ...
        PropertyChanges { morphProgress: 1; target: canvas }
    }
    transitions: Transition {
        // ...
        NumberAnimation {
            property: "morphProgress"
            easing.type: Easing.InOutCubic
            duration: 500
        }
    }

    onMorphProgressChanged: requestPaint()

    onPaint: {
        // ...
        // Use our cubic-interpolated morphProgress to extract
        // other animation parameter values
        function interpolate(first, second, ratio) {
            return first + (second - first) * ratio;
        };
        var vArrowEndDelta = interpolate(vDelta, vDelta * 1.25, morphProgress)
        var vArrowTipDelta = interpolate(vDelta, 0, morphProgress)
        var arrowEndX = interpolate(left, right - vArrowEndDelta, morphProgress)

        ctx.lineCap = "square"
        ctx.lineWidth = vDelta * 0.4
        ctx.strokeStyle = 'black'
        var lineCapAdjustment = interpolate(0, ctx.lineWidth / 2, morphProgress)

        ctx.beginPath()
        ctx.moveTo(arrowEndX, vCenter - vArrowEndDelta)
        ctx.lineTo(right, vCenter - vArrowTipDelta)
        ctx.moveTo(left + lineCapAdjustment, vCenter)
        ctx.lineTo(right - lineCapAdjustment, vCenter)
        ctx.moveTo(arrowEndX, vCenter + vArrowEndDelta)
        ctx.lineTo(right, vCenter + vArrowTipDelta)
        ctx.stroke()
        // ...
    }

Which gives us our final result:

Wrapping it up

This is a simple example, but a more complex drawing will both be more difficult to maintain and risk hitting performance bottlenecks, which would defeat the purpose of the approach. For that reason it's important to consider the limits of the technology while designing the UI.

Even though not as smooth or responsive, an AnimatedImage will sometimes be a more cost effective approach and require less coordination between the designer and the developer.

Performance and resources

Yes we're using the GPU, but the Canvas also has costs to consider:

  • Every Canvas item will allocate a QOpenGLFramebufferObject and hold a piece of graphics memory.
  • Each pixel will need to be rendered twice for each frame, once onto the framebuffer object and then from the FBO to the window. This can be an issue if many Canvas items are animating at the same time or if the Canvas is taking a large portion of the screen on lower-end hardware.
  • The OpenGL paint engine isn't a silver bullet and state changes on the Canvas' context should be avoided when not necessary. Since draw calls aren't batched together, issuing a high number of drawing commands can also add overhead and reduce OpenGL's ability to parallelize the rendering.
  • Declarative animations are great, but since we are writing our rendering code in JavaScript we are losing a part of their advantage and must accept a small overhead caused by our imperative painting code.

This leads us to our next blog post, next week we'll see how we can reduce the overhead to almost nothing by using a much more resource effective QML item: the ShaderEffect. You can subscribe via RSS or e-mail to be notified.

Complete code

import QtQuick 2.0
Canvas {
    id: canvas
    width: 256
    height: 256

    property bool arrowFormState: false
    function toggle() { arrowFormState = !arrowFormState }

    property real angle: 0
    property real morphProgress: 0
    states: State {
        when: arrowFormState
        PropertyChanges { angle: 180; target: canvas }
        PropertyChanges { morphProgress: 1; target: canvas }
    }
    transitions: Transition {
        RotationAnimation {
            property: "angle"
            direction: RotationAnimation.Clockwise
            easing.type: Easing.InOutCubic
            duration: 500
        }
        NumberAnimation {
            property: "morphProgress"
            easing.type: Easing.InOutCubic
            duration: 500
        }
    }

    onAngleChanged: requestPaint()
    onMorphProgressChanged: requestPaint()

    renderTarget: Canvas.FramebufferObject
    renderStrategy: Canvas.Cooperative

    onPaint: {
        var ctx = getContext('2d')
        // The context keeps its state between paint calls, reset the transform
        ctx.resetTransform()

        ctx.fillStyle = 'white'
        ctx.fillRect(0, 0, width, height)

        // Rotate from the center
        ctx.translate(width / 2, height / 2)
        ctx.rotate(angle * Math.PI / 180)
        ctx.translate(-width / 2, -height / 2)

        var left = width * 0.25
        var right = width * 0.75
        var vCenter = height * 0.5
        var vDelta = height / 6

        // Use our cubic-interpolated morphProgress to extract
        // other animation parameter values
        function interpolate(first, second, ratio) {
            return first + (second - first) * ratio;
        };
        var vArrowEndDelta = interpolate(vDelta, vDelta * 1.25, morphProgress)
        var vArrowTipDelta = interpolate(vDelta, 0, morphProgress)
        var arrowEndX = interpolate(left, right - vArrowEndDelta, morphProgress)

        ctx.lineCap = "square"
        ctx.lineWidth = vDelta * 0.4
        ctx.strokeStyle = 'black'
        var lineCapAdjustment = interpolate(0, ctx.lineWidth / 2, morphProgress)

        ctx.beginPath()
        ctx.moveTo(arrowEndX, vCenter - vArrowEndDelta)
        ctx.lineTo(right, vCenter - vArrowTipDelta)
        ctx.moveTo(left + lineCapAdjustment, vCenter)
        ctx.lineTo(right - lineCapAdjustment, vCenter)
        ctx.moveTo(arrowEndX, vCenter + vArrowEndDelta)
        ctx.lineTo(right, vCenter + vArrowTipDelta)
        ctx.stroke()
    }
    Timer { repeat: true; running: true; onTriggered: toggle() }
}

GPU drawing using ShaderEffects in QtQuick


A ShaderEffect is a QML item that takes a GLSL shader program allowing applications to render using the GPU directly. Using only property values as input, as with the Canvas in our previous article, we will show how a ShaderEffect can be used to generate a different kind of visual content, with even better performance. We will also see how we can use the fluidity it provides in user interface designs, again taking Google's Material Design as a concrete example.

Quick introduction

The fragment (pixel) shader

This can be a difficult topic, but all you need to know for now is that correctly typed QML properties end up in your shader's uniform variables of the same name and that the default vertex shader will output (0, 0) into the qt_TexCoord0 varying variable for the top-left corner and (1, 1) at the bottom-right. Since different values of the vertex shader outputs will be interpolated into the fragment shader program inputs, each fragment will receive a different qt_TexCoord0 value, ranging from (0, 0) to (1, 1). The fragment shader will rasterize our rectangular geometry by running once for every viewport pixel it intersects and the output value of gl_FragColor will then be blended onto the window according to its alpha value.

This article won't be talking about the vertex shader; the default one will do fine in our situation. I also encourage you to eventually read some of the tutorials available about shaders and the OpenGL pipeline if you want to write your own.

A basic example

import QtQuick 2.0
ShaderEffect {
    width: 512; height: 128
    property color animatedColor
    SequentialAnimation on animatedColor {
        loops: Animation.Infinite
        ColorAnimation { from: "#0000ff"; to: "#00ffff"; duration: 500 }
        ColorAnimation { from: "#00ffff"; to: "#00ff00"; duration: 500 }
        ColorAnimation { from: "#00ff00"; to: "#00ffff"; duration: 500 }
        ColorAnimation { from: "#00ffff"; to: "#0000ff"; duration: 500 }
    }

    blending: false
    fragmentShader: "
        varying mediump vec2 qt_TexCoord0;
        uniform lowp float qt_Opacity;
        uniform lowp vec4 animatedColor;

        void main() {
            // Set the RGBA channels of animatedColor as our fragment output
            gl_FragColor = animatedColor * qt_Opacity;

            // qt_TexCoord0 is (0, 0) at the top-left corner, (1, 1) at the
            // bottom-right, and interpolated for pixels in-between.
            if (qt_TexCoord0.x < 0.25) {
                // Set the green channel to 0.0, only for the left 25% of the item
                gl_FragColor.g = 0.0;
            }
        }
    "
}

This animates an animatedColor property through a regular QML animation. Any change to that property, through an animation or not, will automatically trigger an update of the ShaderEffect. Our fragment shader code then directly sets that color in its gl_FragColor output, for all fragments. To show something slightly more evolved than a plain rectangle, we clear the green component of some fragments based on their x position within the rectangle, leaving only the blue component to be animated in that area.

Parallel processing and reduced shared states

One of the reasons that graphics hardware can offer so much rendering power is that it offers no way to share or accumulate states between individual fragment draws. Uniform values are shared between all triangles included in a GL draw call. Every per-fragment state first has to go through the vertex shader.

In the case of the ShaderEffect, this means that we are limited to qt_TexCoord0 to differentiate pixels. The drawing logic can only be based on that input using mathematical formulas or texture sampling of an Image or a ShaderEffectSource.

Using it for something useful

Even though this sounds like trying to render something on a graphing calculator, some people achieve incredibly good-looking effects with those limited inputs. Have a look at Shadertoy to see what others are doing with equivalent APIs within WebGL.

Design and implementation

Knowing what we can do with it allows us to figure out ways of using this in GUIs to give smooth and responsive feedback to user interactions. Using Android's Material Design as a great example, let's try to implement a variant of their touch feedback visual effect.

This is what the implementation looks like. The rendering is more complicated, but the concept is essentially the same as in the simple example above. The fragment shader will first set the fragment to the hard-coded backgroundColor, calculate if the current fragment is within our moving circle according to the normTouchPos and animated spread uniforms and finally apply the ShaderEffect's opacity through the built-in qt_Opacity uniform:

import QtQuick 2.2
ShaderEffect {
    id: shaderEffect
    width: 512; height: 128

    // Properties that will get bound to a uniform with the same name in the shader
    property color backgroundColor: "#10000000"
    property color spreadColor: "#20000000"
    property point normTouchPos
    property real widthToHeightRatio: height / width
    // Our animated uniform property
    property real spread: 0
    opacity: 0

    ParallelAnimation {
        id: touchStartAnimation
        UniformAnimator {
            uniform: "spread"; target: shaderEffect
            from: 0; to: 1
            duration: 1000; easing.type: Easing.InQuad
        }
        OpacityAnimator {
            target: shaderEffect
            from: 0; to: 1
            duration: 50; easing.type: Easing.InQuad
        }
    }

    ParallelAnimation {
        id: touchEndAnimation
        UniformAnimator {
            uniform: "spread"; target: shaderEffect
            from: spread; to: 1
            duration: 1000; easing.type: Easing.OutQuad
        }
        OpacityAnimator {
            target: shaderEffect
            from: 1; to: 0
            duration: 1000; easing.type: Easing.OutQuad
        }
    }

    fragmentShader: "
        varying mediump vec2 qt_TexCoord0;
        uniform lowp float qt_Opacity;
        uniform lowp vec4 backgroundColor;
        uniform lowp vec4 spreadColor;
        uniform mediump vec2 normTouchPos;
        uniform mediump float widthToHeightRatio;
        uniform mediump float spread;

        void main() {
            // Pin the touched position of the circle by moving the center as
            // the radius grows. Both left and right ends of the circle should
            // touch the item edges simultaneously.
            mediump float radius = (0.5 + abs(0.5 - normTouchPos.x)) * 1.0 * spread;
            mediump vec2 circleCenter =
                normTouchPos + (vec2(0.5) - normTouchPos) * radius * 2.0;

            // Calculate everything according to the x-axis assuming that
            // the overlay is horizontal or square. Keep the aspect for the
            // y-axis since we're dealing with 0..1 coordinates.
            mediump float circleX = (qt_TexCoord0.x - circleCenter.x);
            mediump float circleY = (qt_TexCoord0.y - circleCenter.y) * widthToHeightRatio;

            // Use step() to apply the color only if x^2 + y^2 <= r^2.
            lowp vec4 tapOverlay =
                spreadColor * step(circleX*circleX + circleY*circleY, radius*radius);
            gl_FragColor = (backgroundColor + tapOverlay) * qt_Opacity;
        }
    "

    function touchStart(x, y) {
        normTouchPos = Qt.point(x / width, y / height)
        touchEndAnimation.stop()
        touchStartAnimation.start()
    }
    function touchEnd() {
        touchStartAnimation.stop()
        touchEndAnimation.start()
    }

    // Timers drive the effect for this demo's purpose; in practice you would use a MouseArea
    Timer { id: touchEndTimer; interval: 125; onTriggered: touchEnd() }
    Timer {
        running: true; repeat: true
        onTriggered: {
            touchStart(width*0.8, height*0.66)
            touchEndTimer.start()
        }
    }
}

Explicit animation control through start() and stop()

One particularity is that we are controlling Animations manually on input events instead of using states. This gives us more flexibility to stop animations immediately when changing states.

The mighty Animators

Some might have noticed the use of UniformAnimator and OpacityAnimator instead of a general NumberAnimation. The major difference between Animator and PropertyAnimation derived types is that animators won't report intermediate property values back to QML; the QML property only gets its final value once the animation is over.

Property bindings or long IO operations on the main thread won't be able to get in the way of the render thread to compute the next frame of the animation.

When using ShaderEffects, a UniformAnimator will provide the quickest rendering loop you can get. Once your declaratively prepared animation is initialized by the main thread and sent over to the QtQuick render thread to be processed, the render thread will take care of computing the next animation value in C++ and trigger an update of the scene, telling the GPU to use that new value through OpenGL.

Apart from the possibility of a few delayed animation frames caused by the thread synchronization, Animators will take the same input and behave just like other Animations.

Resource costs and performance

ShaderEffects are often depicted with their resource-hungry brother, the ShaderEffectSource, but when a ShaderEffect is used alone to generate visual content like we're doing here, it has very little overhead. Unlike the Canvas, ShaderEffect instances also don't each own an expensive framebuffer object, so they can be instantiated in higher quantities without having to worry about their cost. All instances of a QML Component having the same shader source string will use the same shader program, and all instances sharing the same uniform values will usually be batched in the same draw call. Otherwise, the cost of a ShaderEffect instance is the little memory used by its vertices and the processing that they require on the GPU. The complexity of the shader itself is the bottleneck that you might hit.

Selectively enable blending

Blending requires extra work from the GPU and prevents batching of overlapping items. It also means that the GPU needs to render the fragments of the Items hidden behind, which it could otherwise just ignore using depth testing.

It is enabled by default to make it work out of the box and it's up to you to disable it if you know that your shader will always output fully opaque colors. Note that qt_Opacity < 1.0 will trigger blending automatically, regardless of this property. The simple example above disables it but our translucent touch feedback effect needs to leave it enabled.

Should I use it?

The ShaderEffect is simple and efficient, but in practice you might find that it's not always possible to do what you want with the default mesh and limited API available through QML.

Also note that using ShaderEffects requires OpenGL. Mesa llvmpipe supports them, and an OpenGL ES2 shader will ensure compatibility with ANGLE on Windows, but you will need fallback QML code if you want to deploy your application with the QtQuick 2D Renderer.

If you need that kind of performance you might already want to go a step further, subclass QQuickItem and use your shader program directly through the public scene graph API. It will involve writing more C++ boilerplate code, but in return you get direct access to parts of the OpenGL API. However, even with that goal in mind, the ShaderEffect will initially allow you to write a shader prototype in no time, giving you the possibility to reuse the shader if you need a more sophisticated wrapper later on.
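For the curious, here is a rough, hypothetical sketch of what that QQuickItem route can look like using the public scene graph API. To keep it short it uses a stock QSGFlatColorMaterial instead of a custom shader program; the class name and the geometry are placeholders for illustration, not code from this article:

#include <QQuickItem>
#include <QSGGeometryNode>
#include <QSGFlatColorMaterial>

// Hypothetical item that fills itself with a black triangle via the scene graph.
class TriangleItem : public QQuickItem
{
    Q_OBJECT
public:
    TriangleItem() { setFlag(ItemHasContents, true); }

protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override
    {
        QSGGeometryNode *node = static_cast<QSGGeometryNode *>(oldNode);
        if (!node) {
            node = new QSGGeometryNode;
            // Three 2D vertices, drawn as a triangle strip by default.
            QSGGeometry *geometry =
                new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 3);
            node->setGeometry(geometry);
            node->setFlag(QSGNode::OwnsGeometry);
            QSGFlatColorMaterial *material = new QSGFlatColorMaterial;
            material->setColor(Qt::black);
            node->setMaterial(material);
            node->setFlag(QSGNode::OwnsMaterial);
        }
        // Update the vertex data; the scene graph uploads it to the GPU.
        QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
        v[0].set(width() / 2, 0);
        v[1].set(0, height());
        v[2].set(width(), height());
        node->markDirty(QSGNode::DirtyGeometry);
        return node;
    }
};

Registered with qmlRegisterType<TriangleItem>(...), it can be instantiated from QML like any built-in item; swapping the flat color material for your own QSGMaterial subclass is where the custom shader program comes back in.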

Try it out

Those animated GIFs aren't anywhere near 60 FPS, so feel free to copy this code into a qml file (or clone this repository) and load it in qmlscene if you would like to experience it properly. Let us know what you think.

New in Qt 5.5: Q_ENUM and the C++ tricks behind it


Qt 5.5 was just released and with it comes a new Q_ENUM macro, a better alternative to the now deprecated Q_ENUMS (with S).

In this blog post, I will discuss this new Qt 5.5 feature: what it does, and how I implemented it. If you are not interested in the implementation details, skip to the conclusion to see what you can do in Qt 5.5 with Q_ENUM.

The problem

In order to better understand the problem it solves, let us look at this typical sample code using Q_ENUMS as it already could have been written with Qt 4.0.

class FooBar : public QObject {
    Q_OBJECT
    Q_ENUMS(Action)
public:
    enum Action { Open, Save, New, Copy, Cut, Paste, Undo, Redo, Delete };
    void myFunction(Action a) {
        qDebug() << "Action is: " << a;
        //...
    }
};

But here, the qDebug output will look like this:
Action is: 8
It would be much better if I could see the text instead, such as:
Action is: Delete

Q_ENUMS tells moc to register the names of the enum values inside the QMetaObject so that they can be used from Qt Designer, from QtScript or from QML. However, this does not work with qDebug.

One could use the information in the QMetaObject while overloading the operator<< for QDebug and use QMetaObject's API:

QDebug operator<<(QDebug dbg, FooBar::Action action)
{
    static int enumIdx = FooBar::staticMetaObject.indexOfEnumerator("Action");
    return dbg << FooBar::staticMetaObject.enumerator(enumIdx).valueToKey(action);
}

That has been working fine since Qt 4.0, but you have to write this operator manually and it is a lot of code that is somewhat error prone. Most of Qt's own enumerations did not even have such an operator.

The Solution

I wanted this to be automatic. The problem is that we had no way to get the QMetaObject of the enclosing QObject (or Q_GADGET) associated with a given enumeration. We also need the name of the enumeration to be passed as an argument to QMetaObject::indexOfEnumerator.
Let us suppose we have some magic functions that would do exactly that. (We will see later how to make them):

QMetaObject *qt_getEnumMetaObject(ENUM);
const char *qt_getEnumName(ENUM);

We could then do:

template<typename T>
QDebug operator<<(QDebug dbg, T enumValue)
{
    const QMetaObject *mo = qt_getEnumMetaObject(enumValue);
    int enumIdx = mo->indexOfEnumerator(qt_getEnumName(enumValue));
    return dbg << mo->enumerator(enumIdx).valueToKey(enumValue);
}

Argument dependent lookup (ADL) will find the right overloads for qt_getEnumMetaObject and qt_getEnumName, and this function will work. The problem is that this template will match any type, even the ones that are not enumerations or that are not registered with Q_ENUM, for which qt_getEnumMetaObject(enum) would not compile. We have to use SFINAE (substitution failure is not an error) to enable this operator only if qt_getEnumMetaObject(enum) compiles:

template<typename T>
typename QtPrivate::QEnableIf<QtPrivate::IsQEnumHelper<T>::Value, QDebug>::Type
operator<<(QDebug dbg, T enumValue)
{
    const QMetaObject *mo = qt_getEnumMetaObject(enumValue);
    int enumIdx = mo->indexOfEnumerator(qt_getEnumName(enumValue));
    return dbg << mo->enumerator(enumIdx).valueToKey(enumValue);
}

QEnableIf is the same as std::enable_if and IsQEnumHelper is implemented this way:

namespace QtPrivate {
    template<typename T> char qt_getEnumMetaObject(const T&);

    template<typename T> struct IsQEnumHelper {
        static const T &declval();
        // If the type was declared with Q_ENUM, the friend qt_getEnumMetaObject()
        // declared in the Q_ENUM macro will be chosen by ADL, and the return type
        // will be QMetaObject*.
        // Otherwise the chosen overload will be the catch-all template function
        // qt_getEnumMetaObject(T) which returns 'char'
        enum {
            Value = sizeof(qt_getEnumMetaObject(declval())) == sizeof(QMetaObject*)
        };
    };
}

So now it all boils down to how to implement the Q_ENUM macro to declare this qt_getEnumMetaObject.
We need to implement the function qt_getEnumMetaObject in the same namespace as the class. Yet, the macro is used within the class. How can we implement the function in the class? Perhaps using some static function or some template magic? No! We are going to use a friend function. Indeed, it is possible to define a function in a friend declaration. As an illustration:

namespace ABC {
    class FooBar {
        friend int foo() { return 456; }
    };
}

foo is in the namespace ABC (or the global namespace if FooBar was not in a namespace). But the interesting fact is that in the body of that function, the lookup is done within the class's scope:

class FooBar {
    friend const QMetaObject *getFooBarMetaObject() { return &staticMetaObject; }
    static const QMetaObject staticMetaObject;
};

This uses the staticMetaObject of the class (as declared in the Q_OBJECT macro). The function can simply be called as getFooBarMetaObject() (without the FooBar:: prefix that would be required if it were a static function instead of a friend).
With that we can now construct the Q_ENUM macro:

#define Q_ENUM(ENUM) \
    friend constexpr const QMetaObject *qt_getEnumMetaObject(ENUM) noexcept { return &staticMetaObject; } \
    friend constexpr const char *qt_getEnumName(ENUM) noexcept { return #ENUM; }

Each instance of this macro will create a new overload of the functions for the given enum type. However, this needs the ENUM type to be declared when we declare the function. Therefore we need to put the Q_ENUM macro after the enum declaration. This also permits only one enum per macro while Q_ENUMS could have several.
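To make that placement rule concrete, here is the FooBar class from the beginning of this post updated for Qt 5.5, with Q_ENUM placed after the enum (a small illustrative rewrite, not code taken from Qt):

class FooBar : public QObject {
    Q_OBJECT
public:
    enum Action { Open, Save, New, Copy, Cut, Paste, Undo, Redo, Delete };
    Q_ENUM(Action)  // must come after the enum declaration it registers

    void myFunction(Action a) {
        qDebug() << "Action is: " << a;  // now prints e.g. "Action is: Delete"
    }
};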

(moc will still interpret the Q_ENUM macro like the old Q_ENUMS macro and generate the same data.)

Using this, I also introduced a new static function, QMetaEnum::fromType<T>(), which lets you easily get a QMetaEnum for a given type. This is how it is implemented:

template<typename T>
QMetaEnum QMetaEnum::fromType()
{
    const QMetaObject *metaObject = qt_getEnumMetaObject(T());
    const char *name = qt_getEnumName(T());
    return metaObject->enumerator(metaObject->indexOfEnumerator(name));
}
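As a short usage sketch (the variable names are just for illustration), fromType<T>() saves us from the manual indexOfEnumerator lookup that the hand-written QDebug operator needed:

QMetaEnum metaEnum = QMetaEnum::fromType<FooBar::Action>();
qDebug() << metaEnum.valueToKey(FooBar::Delete);  // prints "Delete"
qDebug() << metaEnum.keyToValue("Paste");         // prints 5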

We can also integrate it with QMetaType to register the enum type automatically and register the corresponding meta object with the metatype system. From that, QVariant can use this information to convert the value from or to a string.
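As an illustrative sketch of what this enables, assuming the FooBar class above:

QVariant v = QVariant::fromValue(FooBar::Delete);
qDebug() << v.toString();  // prints "Delete" -- the value name rather than "8"

The conversion in the other direction, from a string back to the enum value, is also available through the same registered metatype information.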

(Note: The code snippets shown were slightly simplified for the purpose of the blog. Check the real implementation of the debug operator<<, QMetaEnum::fromType, or QTest::toString.)

Conclusion

Q_ENUM is like the old Q_ENUMS but with those differences:

  • It needs to be placed after the enum in the source code.
  • Only one enum can be put in the macro.
  • It enables QMetaEnum::fromType<T>().
  • These enums are automatically registered as a QMetaType (no need to add them with Q_DECLARE_METATYPE anymore).
  • Enums passed to qDebug will print the name of the value rather than the number.
  • When put in a QVariant, toString gives the value name.
  • The value name is printed by QCOMPARE (from Qt 5.6).

You can read more articles about Qt internals on our blog.

We are going to Qt Developer Days 2011

Qt Developer Days 2011 in San Francisco


In one week, we will also be at the Qt Developer Days in San Francisco. Let's see how those compare to the Devdays in Munich. Our talks will be the same as in Munich.

This means Markus will talk about the Qt network stack and how you can use it for TCP and UDP sockets and TCP servers. He will also take a quick look at QML's XMLHttpRequest.

Olivier will talk about multithreading (using QtConcurrent) and give an introduction to lock-free programming (using QAtomicInteger/QAtomicPointer) before going through the internals of QMutex in Qt5.

If you want to meet us there, feel free to send an e-mail.

Update: You can now see our talks online.
