Mobile back-ends and the big mess

If you’ve ever wanted to build a complex application for Android, iOS or any other mobile platform, then you’ve probably had a headache wondering which back-end platform to choose.

Google Firebase, Amazon Web Services, Microsoft Azure and a few minor SaaS vendors are the ones likely to be compared in order to make such an important strategic decision.

As a developer, I’ve worked with Amazon Web Services in one project and I have some experience using Firebase within Android apps. Both of them have very good documentation, excellent support, more features than you will ever need, and more or less easy-to-understand terms and conditions. As you can see, the weak point of these services is the terms and conditions, but also the pricing. Yes, everything is easy and nice and colorful until you get to the nasty part of the deal: pricing and, even more important, future pricing modifications — that is, the terms and conditions.

I want to point out that it doesn’t matter at all that the pricing policy is clear enough if the terms and conditions aren’t. Consider, for example, the following Azure pricing estimate:

[Screenshot: Azure storage pricing estimate]

…not easy to understand, eh? Do you even know how you could estimate the number of data retrieval operations for your app as it grows? You simply can’t, because:

  1. Your app’s user base will not grow uniformly.
  2. Your income may not grow linearly with the number of users (which is very likely, especially at the beginning of any app, when you may not even have a monetization system at all).
  3. Even if your app doesn’t change how it works internally and how it uses the SaaS services, the users could change the way they use it, somehow increasing your needs or your monthly bill.
  4. Your app code is going to change: maybe you will access your back-end another way, maybe more frequently, maybe doing more “data retrieval” and less “data write”, or maybe increasing the “read operations” while decreasing the capacity needed.

If this isn’t a mess, please tell me what would qualify as one. And it gets worse… as these are just the options for the storage service!

On the other hand, consider the following pricing model from Firebase:

[Screenshot: Firebase pricing plans]

…seems quite a bit easier, right?

How can such a huge difference exist between the pricing policies of two providers that are so similar in capabilities?

This will blow your mind, but it’s actually much easier to understand the pricing policy of Azure than that of Firebase, and the reason is in the terms and conditions. The Firebase pricing complexity is just hidden, but still present. Consider, for example, downloading one BIT of info from the Firebase Realtime Database, every minute, by a hundred thousand clients, for a full month. That would be 100,000 × 1,440 × 30 = 4.32 billion bits, roughly 0.5 GB of data transferred from your back-end. That’s not much, is it? That will cost you $25 a month. But now consider that the Firebase terms and conditions permit charging you for the full amount of data transferred from the database. To transfer one bit from your database you will receive some kind of JSON payload like { “value”: ”1″ }. Suddenly you are not being charged for that single boolean (bit) value, but for a response some 20 bytes long. That’s 20 × 8 = 160 times more: around 80 GB, which won’t fit into your “cheap” plan. Then go further and consider that all the overhead of a full HTTPS response is included in the chargeable traffic. That may get you to incredibly high costs.
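The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The 20-byte payload size is an illustrative assumption, not an official Firebase figure:

```java
// Back-of-the-envelope sketch of the Firebase traffic estimate discussed above.
// All numbers are illustrative, not an official pricing calculation.
public class FirebaseTrafficEstimate {
    public static void main(String[] args) {
        long clients = 100_000;
        long requestsPerDay = 1_440;   // one request per minute
        long days = 30;
        long requests = clients * requestsPerDay * days; // 4.32 billion requests

        // What you think you transfer: one bit per request.
        double gigabytesAsBits = requests / 8.0 / (1024.0 * 1024 * 1024);
        // What you may actually be billed for: a ~20-byte JSON payload per request.
        double gigabytesAsJson = requests * 20.0 / (1024.0 * 1024 * 1024);

        System.out.printf("naive: %.2f GB, billed: %.2f GB%n",
                gigabytesAsBits, gigabytesAsJson);
    }
}
```

Running this prints roughly 0.50 GB for the naive estimate versus about 80 GB once the JSON payload is counted — a 160× difference from the payload size alone, before any HTTPS overhead.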

It’s OK, you just have to KNOW that in advance, right? But what if they CHANGE the rules in the middle of the match? Then you can go bankrupt, your app will be instantly dead and your users will go away. Impossible? Check out this story about a cost increase of 7000%.

Things can get even worse, as one particular vendor may simply go out of business (hence, again, the importance of the terms and conditions). Just ask the folks who were using the Parse cloud services (owned by Facebook) for their apps: Facebook simply shut it down.

You are probably now figuring out why the “big mess” in the title when talking about back-end SaaS solutions. And you are probably now convinced that it would be a HUGE MISTAKE to pick a specific vendor which will hold you and your core business hostage, as you won’t be able to move from one SaaS provider’s specific solutions to another’s without major costs.

So it’s always advisable to stay away from commercial SaaS providers and to make your app or business independent of any particular vendor. I can only imagine 3 ways of doing this:

  1. Designing and implementing your own back-end, say using REST services. You won’t have client-specific libraries for Android, iOS, JavaScript and so on, but you can always just use bare REST calls. This is the most flexible and independent approach, as you will only need a server to deploy your back-end on; if one vendor goes crazy with the pricing or just goes out of business, you can simply walk next door and deploy your back-end without any issue.
  2. Designing your client code to easily switch from one SaaS provider to another. You can probably wrap your code with an interface which supports a couple of vendors, making it easy to update your frontend to adapt to an eventual back-end technology change. The sad part is that you won’t be using the awesome, comfortable and ultra-fast solutions that providers like Firebase give you to build your frontend.
  3. Using some open source back-end alternative. You won’t have to design your own back-end, and your back-end will be able to travel from one provider to another, as it’s just some server code.
    And AFAIK the best alternative, if you choose this approach, would be Parse Server, which is indeed based on the software from the SaaS that Facebook shut down. Obviously it’s not even close in capabilities to the commercial solutions for the moment, but it’s better than nothing. And it’s already Dockerized.
    (Please let me know if you’ve heard about another similar open source back-end solution.)
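Approach 2 can be sketched with a vendor-neutral interface. Everything here (`BackendClient`, `InMemoryBackend`, the key/value methods) is an illustrative name I made up, not a real SDK API; a Firebase or Parse adapter would implement the same interface on top of the vendor SDK:

```java
// Sketch of approach 2: hide the SaaS provider behind your own interface
// so client code never calls a vendor SDK directly. Names are hypothetical.
import java.util.HashMap;
import java.util.Map;

interface BackendClient {
    void save(String key, String value);
    String load(String key);
}

// An in-memory fake: useful for tests and as a template for real adapters
// (a hypothetical FirebaseBackend or ParseBackend would implement the same
// interface on top of the corresponding vendor SDK).
class InMemoryBackend implements BackendClient {
    private final Map<String, String> store = new HashMap<>();
    @Override public void save(String key, String value) { store.put(key, value); }
    @Override public String load(String key) { return store.get(key); }
}

public class BackendDemo {
    public static void main(String[] args) {
        BackendClient backend = new InMemoryBackend(); // swap the vendor here
        backend.save("user/42/noAds", "true");
        System.out.println(backend.load("user/42/noAds")); // prints "true"
    }
}
```

Switching providers then means writing one new adapter class instead of touching every screen of the app.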

Personally, I will be giving parseplatform a try for my next toy project (maybe using a platform which offers a freemium Parse SaaS) and I expect to make some comparison with Firebase regarding Android usage.


Big fact tables, fat problems and partitioning

Sometimes, in a data warehouse you’ve just started working with, you suddenly realize you have a big problem:

[Screenshot: a table with almost a billion records]

Yep, that’s a table with almost a thousand million records in it. And the problem is that it’s a fact table, so you will find yourself executing queries against that giant table, apart from the usual ETL process that feeds smaller ones, which is, on its own, quite a big problem.

Obviously, it doesn’t matter how good the indexes are or how well optimized the queries can be: a query is going to take its time when you have that many records.

Normally I don’t see such big tables in data warehouse projects, and my first instinct when I had to deal with this was to try to somehow reduce the size itself — to try to do things another way. But in this particular case it wasn’t possible, as all the records needed to be accessible. This is where partitioning comes to save your day.

Partitioning is a well-known database engine feature which is supported by almost all the big databases. When you have big data and you can’t go NoSQL, you need partitioning. As you can imagine, it basically means putting the data in several “partitions”, which internally are treated as different tables. So if you have, for example, a huge sales table, you could “partition” the table by country. This is the same as having a dozen different sales tables, one per country, with the obvious advantage of not having to create and/or maintain those tables every time you need to insert data for a different country.
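The “one internal table per partition key” idea can be illustrated with a toy in-memory sketch (this is just an analogy for what the database engine does, not a database feature demo):

```java
// Toy illustration of partitioning: one "internal table" (list of rows) per
// country, so a query filtering on the key only scans one partition.
import java.util.*;

public class PartitionSketch {
    private final Map<String, List<String>> partitions = new HashMap<>();

    public void insert(String country, String row) {
        // Route the row to its partition, creating the partition on demand.
        partitions.computeIfAbsent(country, k -> new ArrayList<>()).add(row);
    }

    // "Partition pruning": only the matching partition is scanned.
    public List<String> selectByCountry(String country) {
        return partitions.getOrDefault(country, Collections.emptyList());
    }

    public static void main(String[] args) {
        PartitionSketch sales = new PartitionSketch();
        sales.insert("ES", "sale-1");
        sales.insert("FR", "sale-2");
        sales.insert("ES", "sale-3");
        // Only the "ES" partition is touched, no matter how big "FR" grows.
        System.out.println(sales.selectByCountry("ES")); // [sale-1, sale-3]
    }
}
```

A real engine does the same routing and pruning at the storage level whenever the `WHERE` clause constrains the partitioning column.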

This would be the strategy: just partition that big fat table in some “logical” way. But there are still a lot of hidden issues once you go down this road. To mention some of the ones we had to deal with in our last project:

  1. Partitioning is not standard; every database has its own syntax. This wouldn’t be a problem if you are selling a service, as you would simply choose Oracle, MySQL or Postgres for your back-end. But if you are selling a product — that is, a software solution to be deployed by many customers, where many of them may already have hardware and software supporting just one specific database vendor…
  2. Even more, some databases support so-called subpartitions and some others don’t. This may force you to skip the subpartition route in many cases.
    Think again about that huge table: you could have several partitions, one for each country (id_country), and for each country several subpartitions, one for each month or year (what’s called partitioning by range). This is very convenient for many reasons, but it may be a bad design decision if the database engine to be used is not fixed from the beginning.
  3. The number of subpartitions, the columns or the method used to partition (the value in a column, the date range in another column, even hash partitions) cannot be made dynamic. Once you choose your partitioning strategy for a table it’s fixed, and you can eventually get to a point where you have a huge amount of data in one subpartition (for example, the subpartition for the year 2017 and the country France) and very little data in another (the year 2017 and the country Spain). This is very inconvenient, as the point of making partitions is to avoid having too many records to search through whenever you make a query.
  4. Once you have a record in one partition, you just cannot “update” the partitioned field — well, you can, as long as the record stays in the same partition. This has to be taken into account during the design phase. Just in case.
  5. If you choose partitioning by value (country), you will be “tied” to it and will probably end up with very heterogeneous partition sizes. Think about using another, more general field, for example an “area” which you can associate with a country afterwards. You can create such areas to balance the partition sizes more easily.
  6. When accessing such big tables, every query can be optimized by specifying the partition or partitions in which it has to be executed. This varies from engine to engine; for example, Oracle doesn’t admit specifying several partitions but MySQL does. And any time you have to optimize the queries, you will find yet another part of the system tied tightly to the partitioning you have chosen.
  7. Sometimes update and delete queries can cause problems when there are so many records. If I were you, I’d run some stress tests to prove the system can handle the rollbacks and logs required for such large data. You would be surprised if I told you.
  8. Any indexes you had should be revised, as you will find that some of them need to be partitioned, some of them are no longer useful, and maybe a new index is needed to get the speed you need.
  9. How many partitions can you have? Is it costly to add new partitions to a partitioned table? These questions are rarely asked, but they are very relevant here: with 5,000 partitions over a billion records, you would still be dealing with partitions of around 200,000 records each. That’s still a lot of data. You could be tempted to just increase the number of partitions and/or subpartitions, but you would soon find that the cost of operating with partitions is quite big once you already have several thousand of them.
    In fact, we found that going beyond roughly that 5,000-partition mark, at least in Oracle, leads to very costly maintenance operations. Any time you create new partitions or delete old ones, you pay a “fine” of several seconds — probably the engine has to check all the partition and subpartition definitions in order to create a new one. Tough on the physical drive.
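The “area” idea from point 5 can be sketched as a mapping you control, so you can group small countries together and give big ones their own partition. The country-to-area assignments below are made up for illustration:

```java
// Sketch of point 5: partition by a synthetic "area" code assigned so that
// partitions stay roughly the same size. The mapping is illustrative.
import java.util.HashMap;
import java.util.Map;

public class AreaBalancing {
    // country -> area; small countries share an area, huge ones get their own.
    private static final Map<String, Integer> AREA = new HashMap<>();
    static {
        AREA.put("FR", 1);   // big market: its own partition
        AREA.put("ES", 2);
        AREA.put("PT", 2);   // smaller markets grouped together
        AREA.put("AD", 2);
    }

    public static int areaOf(String country) {
        return AREA.getOrDefault(country, 0); // area 0: everything else
    }

    public static void main(String[] args) {
        System.out.println(areaOf("FR")); // 1
        System.out.println(areaOf("PT")); // 2
        System.out.println(areaOf("JP")); // 0
    }
}
```

Because the mapping lives in your ETL code rather than in the partition definition, you can rebalance by reassigning countries to areas without changing the table’s partitioning scheme.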

In conclusion, partitioning is the best possible way to deal with huge amounts of SQL data, but it can be problematic to implement and it’s not easy to choose the correct strategy.

I’d like to hear your own stories with partitioning 🙂

Abusing Java 8 lambda functions

Yep, lambda functions are nice, beautiful and a very short way to write code. Almost always.

But sometimes they can become a hell to manage — in particular, whenever you need them to work with more complex function calls which may throw exceptions. The reason can be difficult to understand for newcomers to Java, but it’s simply that a lambda expression is just syntax — a shortcut notation for a more complex thing, an anonymous class implementing a functional interface whose method doesn’t declare any checked exception — which is what it compiles to.

So it can be very frustrating that, when you try to do the following, the code won’t even compile because there’s a checked exception that may be thrown, no matter where you try to put a try-catch block:

arrayListOfObjects.forEach(cond -> {
    cond.execute(); // hypothetical call that throws a checked SQLException
});

You can of course change your code to a more standard, old-school for loop, but there is a way to avoid having to do so, and it’s quite easy indeed.

So we will show some classes you have to add to your code in order to get this final result:

arrayListOfObjects.forEach(throwingConsumerWrapper(cond -> {
   // May throw some SQLException
   // Won't compile without the throwingConsumerWrapper
   cond.execute(); // hypothetical call that throws a checked SQLException
}));

As you can see, it’s quite easy to write and understand. The only problem is that the exception that could be thrown won’t be processed properly: it gets rethrown wrapped in a RuntimeException, so you have to handle that yourself.

Let’s see now how this magic works, the classes you have to declare in order for this to work properly: 

    @FunctionalInterface
    public interface ThrowingConsumer<T, E extends Exception> {
        void accept(T t) throws E;
    }

    static <T> Consumer<T> throwingConsumerWrapper(
            ThrowingConsumer<T, Exception> throwingConsumer) {

        return i -> {
            try {
                throwingConsumer.accept(i);
            } catch (Exception ex) {
                throw new RuntimeException(ex);
            }
        };
    }
…you can just copy this code and you are done. But let’s see how this works internally.

The first thing would be to take a look at the forEach method implemented by the ArrayList class:

    @Override
    public void forEach(Consumer<? super E> action) {
        Objects.requireNonNull(action);
        final int expectedModCount = modCount;
        @SuppressWarnings("unchecked")
        final E[] elementData = (E[]) this.elementData;
        final int size = this.size;
        for (int i = 0; modCount == expectedModCount && i < size; i++) {
            action.accept(elementData[i]);
        }
        if (modCount != expectedModCount) {
            throw new ConcurrentModificationException();
        }
    }


As you can see… it just accepts a Consumer, a functional interface with a single accept method to be called.

Then take a look at our classes. First we have a functional interface, ThrowingConsumer, which declares a single method, accept, that takes a T object and may throw an exception of type E. It’s meant to replace java.util.function.Consumer from the Java 8 API, which has an identical interface but without the throws clause.

Next we declare a static method, throwingConsumerWrapper, which takes a ThrowingConsumer and wraps it into a plain Consumer. It is this wrapper that supports the exception management; as you can see, we only handle a generic Exception, but the method can be rewritten to be more specific if needed.

And… yes, the syntax is kind of crazy, and it’s quite hard to understand what’s happening until you read the code slowly.

For those of you who are wondering whether this solves every problem you can possibly have using — or abusing — lambda functions to iterate over a collection: it doesn’t. For example, if you have a Map<T,S> instead of an ArrayList… the consumer needs to accept two parameters :).
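Following the same pattern, a two-argument variant for Map.forEach could look like this. This is my own sketched extension of the wrapper above, not code from the Java API:

```java
// A two-argument variant of the throwing-consumer pattern, for Map.forEach.
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class ThrowingBiConsumerDemo {

    @FunctionalInterface
    interface ThrowingBiConsumer<T, U, E extends Exception> {
        void accept(T t, U u) throws E;
    }

    static <T, U> BiConsumer<T, U> throwingBiConsumerWrapper(
            ThrowingBiConsumer<T, U, Exception> throwingBiConsumer) {
        return (t, u) -> {
            try {
                throwingBiConsumer.accept(t, u);
            } catch (Exception ex) {
                throw new RuntimeException(ex); // rewrap the checked exception
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        // The lambda body may now call code that throws checked exceptions.
        ages.forEach(throwingBiConsumerWrapper((name, age) -> {
            if (age < 0) throw new Exception("bad age");
            System.out.println(name + " is " + age); // prints "alice is 30"
        }));
    }
}
```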

Hope you find this useful!

The death of DotNetNuke

It was around 2007 when DNN caught my attention for the first time. I needed a CMS that was not based on PHP, since I really didn’t want to get involved in yet another technology, and I was somewhat familiar with C# and .NET tech. At that time DNN was, in my opinion, far better in every aspect than any other free CMS available. Its weakest point was the lack of free modules (there were some awesome ones, like the forum or blog modules, but when you needed, say, a photo gallery module, you would find only one free option instead of a bunch of them), and its strongest points were the architecture, the user experience, the easy-to-develop skins and so on.

Custom skin for a sports club DNN website

It was a really good free CMS: well built, easily customizable and expandable… I built some websites using it, one for a small local soccer club which isn’t online anymore, and another for a local RC sports club which I plan to shut down soon and replace with another CMS. I built some other small websites, but those two were the big ones.

So, if it was so good, why is it dying?

In my opinion, they wanted to make money from it way too early. As the DNN CMS was just another contender against the more widespread and popular PHP CMSs, they started to offer an enterprise version and a community version. They somehow stopped developing modules, or maybe the people who developed the first modules stopped working on them. From version 4 to version 7 of the CMS you would find the exact same free modules, the same technology, and dropping community support.

Don’t get me wrong, it’s good they make as much money as they can, but monetization has to be done at the proper time.

So, needing to upgrade a website from a very old version of DNN to a newer one, I find it discouraging how little they improved the CMS in so many years. DNN is very powerful, but you need modules to leverage it; without modules you only get a fatter and slower piece of software that needs a more expensive hosting service. It’s even worse because new contenders have arrived since then: you now have bigger and more numerous competitors, most of them focused on just one thing, be it shops or blogs. They offer a huge variety of modules for their respective targets and they do their job excellently.


Recently, less than a year ago, ESW Capital acquired DNN Corp. So, considering DNN is practically dead right now, I’m really looking forward to ESW making some big investment in the DNN CMS to make it shine again and bring it back from the grave where it has been resting for the last few years.

In app purchases for Android with Unreal Engine 4.16 and Blueprints

You know, once you’ve almost finished a nice little game with such an amazing engine as Unreal Engine, you have to deal with all the nasty and uncomfortable details of publishing and monetizing it.

But is Unreal Engine ready to make that final step as easy as developing the game itself? …Well, not quite.

First of all, this applies to version 4.16.3. I think it hasn’t changed in the current beta version 4.19, and after all the searching I’ve done it doesn’t look like it’s going to be fixed soon; but if your engine version is much newer than this, maybe it no longer applies to you.

When you have to monetize your game, you want ads and you want in-app purchases. One possibility is to have a free limited version and a paid full version instead, in which case you can just use the Google Play store, publish two separate versions, and everything will be easier. But it looks like you will get more revenue with a free version that displays some kind of ads and offers some in-app purchases.

For my experiment and first UE game, “Baby Wooden Games”, I’ve decided to go with a free version with ads and offer just one IAP (in-app purchase) product: the one that removes ads completely from the game. This would be a “non-consumable” in-app purchase.

Now, you have to handle several use cases:

  1. When the user doesn’t own the product, you have to show ads.
  2. When the user buys the IAP, you must not show ads.
  3. You have to handle the purchase workflow in order to let the user buy the products.
  4. If the user owns the product and makes a new installation on a new device, you have to detect that the product was already bought and just not show any ads.
  5. If the user returns the product (an Android user can return some purchases within a period of time if they are not happy with them), then you have to detect it and show ads again on every device the product was installed on.
  6. What if the user loads your game without an internet connection? Some ads get cached on devices and can be shown without a connection; but if you can’t check online whether the product was bought, how do you know whether to show ads or not?
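The decision behind use cases 1–6 can be sketched as a small piece of logic combining the last online answer with a locally cached flag. This is my own illustrative sketch, not engine or store API code:

```java
// Hedged sketch of the "show ads?" decision: combine the online ownership
// check (null when offline) with a flag persisted from the last known answer.
public class AdGate {
    private final Boolean onlineOwnsNoAds; // null = no connection / unknown
    private final boolean cachedOwnsNoAds; // persisted from the last check

    AdGate(Boolean onlineOwnsNoAds, boolean cachedOwnsNoAds) {
        this.onlineOwnsNoAds = onlineOwnsNoAds;
        this.cachedOwnsNoAds = cachedOwnsNoAds;
    }

    boolean shouldShowAds() {
        if (onlineOwnsNoAds != null) {
            return !onlineOwnsNoAds; // online answer wins (covers refunds too)
        }
        return !cachedOwnsNoAds;     // offline: trust the cached flag
    }

    public static void main(String[] args) {
        System.out.println(new AdGate(true, false).shouldShowAds());  // false
        System.out.println(new AdGate(null, true).shouldShowAds());   // false
        System.out.println(new AdGate(null, false).shouldShowAds());  // true
    }
}
```

Note that the whole scheme depends on being able to ask the store which products are already owned — which is exactly the missing piece discussed below.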

You will have even more use cases if you have more IAP products, some of them consumable, but let’s stick with this example.

The problem:

UE4 provides several BP functions to deal with IAPs and to cover ALMOST all your needs:


And I can tell you all of them work correctly, but also that there’s ONE MISSING!

By the way, the easier ones to use are the second ones, those that don’t start with “f”; they can only be used in the main graph and don’t work inside functions. The “f” functions have to be wired up with ad-hoc events, which can be difficult to use for some people.

  1. Make an In-App Purchase:
    This works, and allows the user to make an in-app purchase.
  2. Restore In-App Purchases:
    I haven’t tested it, but AFAIK it works correctly.
  3. Read In-App Purchase Information:
    This one is the main reason for this post, and for my having lost several hours figuring out how to finish all my IAP workflows.
    This function returns information on the AVAILABLE IAP products; it doesn’t return info on the ALREADY owned IAP products! So, if you have several IAPs that can be created dynamically in the app store and must be offered dynamically to your users, this is what you need. But what about the already purchased IAPs? How can I know whether a “no-ads” product is owned by the user?
  4. The missing one: a function to read the currently owned in-app purchase products. And you need it: you can’t even publish a game in several stores if you can’t detect whether the user owns a product in a new installation. This is the reason for this post, and… don’t worry! We have figured out a couple of workarounds and are going to share them with you!

The workaround, not that good one:

So, if a user buys the product and you store a boolean value in preferences, then it doesn’t matter that you can’t check online whether the user bought it, right?

Well, it matters a lot, because if the user makes a new installation you have no way to know whether you must show ads or not. In fact, you have one way: if the user tries to buy the product again, you will receive a “fail” from the MakeInAppPurchase method with the message “Already Owned”, which will allow you to store the correct boolean preference on the new device.

But even though it works, this is not acceptable to me, as it’s unintuitive and can annoy the users; that’s why I kept searching.

The workaround, third-party plugin:

There is at least one plugin to deal with IAP, from the SDKBOX company, and it’s supposed to work, although it won’t be easy, at least on the 4.16.3 engine, as it won’t compile right out of the box. You have to google a lot to solve the compilation issue and edit some .cs files, and voilà, you have the plugin installed… but the documentation isn’t enough and you will see game crashes. On top of that, you will have to convert your pure BP project into a C++ one and install Visual Studio… not an easy path to follow, which is why I discarded it after some research.

The workaround, easy but hacky one:

I decided to look at the Java part of the engine which is meant to deal with the Google Play store IAPs. The file is GooglePlayStoreHelper.java, whose methods are called by the previous BP methods through a C++ wrapper.

If you examine that file you will find that it has methods to deal with our issue:


So, if there is a method in Java to recover the already owned IAPs, why can’t we get them from UE BP?

The answer can be embarrassing, because it looks to me like somebody messed up and/or forgot to implement the C++ wrapper or the BP function. Anyway, the method which is called by the BP function in that Java file is QueryInAppPurchases, which returns just the details of the IAPs, like name, description, price and so on, but which won’t return the purchased status, receipt or transaction code.

So, what if that method returned not just the name of each product, but the name optionally followed by a token like “[OWNED]”? That would solve our issue: we would just need to check the names returned by the BP ReadInAppPurchaseInformation node for that token!

It’s not very hard to do, as you already have all the needed code in that Java file.

	/**
	 * Query product details for the provided skus, marking already-owned
	 * products by appending an "[OWNED]" token to their product id.
	 */
	public boolean QueryInAppPurchases(String[] InProductIDs)
	{
		Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases");

		ArrayList<String> skuList = new ArrayList<String>();
		for (String productId : InProductIDs)
		{
			Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases - Querying " + productId);
			skuList.add(productId);
		}

		Bundle querySkus = new Bundle();
		querySkus.putStringArrayList(GET_SKU_DETAILS_ITEM_LIST, skuList);

		// ----------- Added: gather the skus the user already owns -----------
		ArrayList<String> ownedSkus = new ArrayList<String>();
		try
		{
			ArrayList<String> purchaseDataList = new ArrayList<String>();
			ArrayList<String> signatureList = new ArrayList<String>();
			int responseCode = GatherOwnedPurchaseData(ownedSkus, purchaseDataList, signatureList, null);
			if (responseCode == BILLING_RESPONSE_RESULT_OK)
			{
				Log.debug("[GooglePlayStoreHelper] - AP GooglePlayStoreHelper::QueryExistingPurchases - User has previously purchased " + ownedSkus.size() + " inapp products");

				ArrayList<String> productTokens = new ArrayList<String>();
				ArrayList<String> receipts = new ArrayList<String>();

				for (int Idx = 0; Idx < ownedSkus.size(); Idx++)
				{
					String purchaseData = purchaseDataList.get(Idx);
					String dataSignature = signatureList.get(Idx);
					try
					{
						Purchase purchase = new Purchase(ITEM_TYPE_INAPP, purchaseData, dataSignature);
						productTokens.add(purchase.getToken());
						receipts.add(Base64.encode(purchase.getOriginalJson().getBytes()));
					}
					catch (JSONException e)
					{
						Log.debug("[GooglePlayStoreHelper] - AP GooglePlayStoreHelper::QueryExistingPurchases - Failed to parse receipt! " + e.getMessage());
					}
				}
				Log.debug("[GooglePlayStoreHelper] - AP GooglePlayStoreHelper::QueryExistingPurchases - Success!");
			}
		} catch (Exception ex) {
			// If the owned purchases can't be gathered, we simply report no owned skus.
		}
		// ----------- End of the added block -----------

		try
		{
			Bundle skuDetails = mService.getSkuDetails(3, gameActivity.getPackageName(), ITEM_TYPE_INAPP, querySkus);

			int response = skuDetails.getInt(RESPONSE_CODE);
			Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases - Response " + response + " Bundle:" + skuDetails.toString());
			if (response == BILLING_RESPONSE_RESULT_OK)
			{
				ArrayList<String> productIds = new ArrayList<String>();
				ArrayList<String> titles = new ArrayList<String>();
				ArrayList<String> descriptions = new ArrayList<String>();
				ArrayList<String> prices = new ArrayList<String>();
				ArrayList<Float> pricesRaw = new ArrayList<Float>();
				ArrayList<String> currencyCodes = new ArrayList<String>();

				ArrayList<String> responseList = skuDetails.getStringArrayList(RESPONSE_GET_SKU_DETAILS_LIST);
				for (String thisResponse : responseList)
				{
					JSONObject object = new JSONObject(thisResponse);
					String productId = object.getString("productId");
					// The hack: tag owned products so the BP side can detect them.
					for (String sku : ownedSkus) {
						if (sku.equals(productId)) {
							productId += "[OWNED]";
						}
					}
					Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases - Parsing details for: " + productId);

					String title = object.getString("title");
					Log.debug("[GooglePlayStoreHelper] - title: " + title);

					// Also expose the owned skus through the description, for debugging.
					String description = "[ownedSkus:";
					for (String sku : ownedSkus) {
						description += sku;
					}
					description += "]" + object.getString("description");
					Log.debug("[GooglePlayStoreHelper] - description: " + description);

					String price = object.getString("price");
					Log.debug("[GooglePlayStoreHelper] - price: " + price);

					double priceRaw = object.getDouble("price_amount_micros") / 1000000.0;
					Log.debug("[GooglePlayStoreHelper] - price_amount_micros: " + priceRaw);

					String currencyCode = object.getString("price_currency_code");
					Log.debug("[GooglePlayStoreHelper] - price_currency_code: " + currencyCode);

					productIds.add(productId);
					titles.add(title);
					descriptions.add(description);
					prices.add(price);
					pricesRaw.add((float)priceRaw);
					currencyCodes.add(currencyCode);
				}

				float[] pricesRawPrimitive = new float[pricesRaw.size()];
				for (int i = 0; i < pricesRaw.size(); i++)
					pricesRawPrimitive[i] = pricesRaw.get(i);

				Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases " + productIds.size() + " items - Success!");
				nativeQueryComplete(response, productIds.toArray(new String[productIds.size()]), titles.toArray(new String[titles.size()]), descriptions.toArray(new String[descriptions.size()]), prices.toArray(new String[prices.size()]), pricesRawPrimitive, currencyCodes.toArray(new String[currencyCodes.size()]));
				Log.debug("[GooglePlayStoreHelper] - nativeQueryComplete done!");
			}
			else
			{
				Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases - Failed!");
				nativeQueryComplete(response, null, null, null, null, null, null);
			}
		}
		catch (Exception e)
		{
			Log.debug("[GooglePlayStoreHelper] - GooglePlayStoreHelper::QueryInAppPurchases - Failed! " + e.getMessage());
			nativeQueryComplete(UndefinedFailureResponse, null, null, null, null, null, null);
		}

		return true;
	}

You just have to replace or edit your engine’s Java file, and it will be used the next time you package or deploy your game. I encourage you to modify it to fit your particular needs.

The BP part is then this easy: just search the returned product name strings for the tag.
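In text form, the check the Blueprint graph performs on the product ids amounts to something like the following sketch (the helper names and product id are illustrative):

```java
// Sketch of the check performed on product ids returned by the modified
// QueryInAppPurchases / ReadInAppPurchaseInformation. Names are illustrative.
public class OwnedTokenCheck {
    static final String OWNED_TOKEN = "[OWNED]";

    static boolean isOwned(String productId) {
        return productId.contains(OWNED_TOKEN);
    }

    // Strip the token to recover the original store product id.
    static String baseId(String productId) {
        return productId.replace(OWNED_TOKEN, "");
    }

    public static void main(String[] args) {
        System.out.println(isOwned("no_ads[OWNED]")); // true
        System.out.println(baseId("no_ads[OWNED]"));  // no_ads
        System.out.println(isOwned("no_ads"));        // false
    }
}
```

In the Blueprint graph this corresponds to a "Contains" string node on each returned product identifier, branching on whether the token is present.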