One of the things my team at StackOverflow does is maintain the CI/CD system which builds all the software we use and produce. This includes the Stack Exchange Android App.
Automating the CI/CD workflow for Android apps is a PITA. The process is full of trips and traps. Here are some notes I made recently.
First, [this is the paragraph where I explain why CI/CD is important. But I'm not going to write it because you should know how important it is already. Plus, Google definitely knows already. That is why the need to write this blog post is so frustrating.]
And therefore, there are two important things that vendors should provide to make CI/CD easy for developers: (Rule 1) builds that can run unattended from the command line, and (Rule 2) a build environment whose installation can be automated.
Android builds can be done from the command line. However, the process itself updates files in the build area. And creating the build environment simply cannot be automated without repackaging all of the files (something I'm not willing to do).
Here are my notes from creating a CI/CD system using TeamCity (a commercial product comparable to Jenkins) for the StackOverflow mobile developers:
The manual way:
CentOS provides no package for Oracle Java 8. Instead, you must download and install it manually.
Method 1: Download it from the Oracle web site. Pick the latest release, 8uXXX where XXX is a release number. (Be sure to pick "Linux x64" and not "Linux x86").
Method 2: Use the above web site to figure out the URL, then use this code to automate the downloading: (H/T to this SO post)
# cd /root
# wget --no-cookies --no-check-certificate --header \
  "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
  "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.rpm"
Dear Oracle: I know you employ more lawyers than engineers, but FFS please just make it possible to download that package with a simple wget. Oh, and the fact that the certificate is invalid means that if this did come to a lawsuit, people would just claim that a MITM attack forged their agreement to the licence.
Install the package:
# yum localinstall jdk-8u102-linux-x64.rpm
...and make a symlink so that our CI system can specify JAVA8_HOME=/usr/java/jdk and not have to update every individual configuration.
# ln -sf /usr/java/jdk1.8.0_102 /usr/java/jdk
We could add this package to our YUM repo, but the benefit would be negligible, and it is questionable whether the license even permits it.
EVALUATION: This step violates Rule 2 above because the download process is manual. It would be better if Oracle provided a YUM repo. In the future I'll probably put it in our local YUM repo. I'm sure Oracle won't mind.
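For what it's worth, adding it to a local repo is only a couple of commands; a minimal sketch, assuming the repo lives in /var/www/html/yum (a placeholder path) and createrepo is installed:

# cp jdk-8u102-linux-x64.rpm /var/www/html/yum/
# createrepo --update /var/www/html/yum/

After that, any host pointed at the repo can install the JDK unattended with yum.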
The Android tools are compiled for 32-bit Linux. I'm not sure why. I presume it is because they want to be friendly to the few developers out there that still do their development on 32-bit Linux systems.
However, I have a few other theories: (a) The Android team has developed a time machine that lets them travel back to 2010, because I happen to know for a fact that Google moved to 64-bit Linux internally around 2011; they created teams of people to find and eliminate any 32-bit Linux hosts. Therefore the only way the Android team could actually still be developing on 32-bit Linux is if they have either hidden their machines from their employer, or they have a time machine. (b) There is no "b". I can't imagine any other reason, and I'm jealous of their time machine.
Therefore, we install some 32-bit libraries to gain backwards compatibility. We do this and pray that the other builds happening on this host won't get confused. Sigh. (This is one area where containers would be very useful.)
# yum install -y glibc.i686 zlib.i686 libstdc++.i686
EVALUATION: B-. Android should provide 64-bit binaries.
The SDK has a command-line installer. The URL is obscured, making it difficult to automate this download. However, you can find the current URL by reading this web page, then clicking on "Download Options", and then selecting Linux. The last time we did this, the URL was: https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz
You can install this in 1 line:
cd /usr/java && tar xzpvf /path/to/android-sdk_r24.4.1-linux.tgz
EVALUATION: Violates Rule 2 because it is not in a format that can easily be automated. It would be better to have this in a YUM repo. In the future I'll probably put this tarfile into an RPM with an install script that untars the file.
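As a sketch of what that might look like, here is one way to wrap the tarball using fpm, a third-party packaging tool; the staging path, package name, and version are placeholders:

# mkdir -p /tmp/stage/usr/java
# tar xzpf android-sdk_r24.4.1-linux.tgz -C /tmp/stage/usr/java
# fpm -s dir -t rpm -n android-sdk -v 24.4.1 -C /tmp/stage usr/java

That simply snapshots the untarred tree into an RPM; a real package would also need to handle the permission fix-ups described later in this post.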
Props to the Android SDK team for making an installer that works from the command line. Sadly it is difficult to figure out which modules should be installed. Once you know the modules you need, specifying them on the command line is "fun"... which is my polite way of saying "ugly."
First I asked the developers which modules they need installed. They gave me a list, which was wrong. It wasn't their fault. There's no history of what got installed. There's no command that shows what is installed. So there was a lot of guess-work and back-and-forth. However, we finally figured out which modules were needed.
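One rough workaround (an observation, not documented behavior): installed modules show up as directories under the SDK root, so you can at least eyeball what is present:

ls /usr/java/android-sdk/build-tools/    # one directory per installed build-tools revision
ls /usr/java/android-sdk/platforms/      # one directory per installed platform (android-24, ...)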
The command to list all modules is:
/usr/java/android-sdk/tools/android list sdk -a
The modules we happened to need are:
1- Android SDK Tools, revision 25.1.7
3- Android SDK Platform-tools, revision 24.0.1
4- Android SDK Build-tools, revision 24.0.1
6- Android SDK Build-tools, revision 23.0.3
7- Android SDK Build-tools, revision 23.0.2
9- Android SDK Build-tools, revision 23 (Obsolete)
19- Android SDK Build-tools, revision 19.1
29- SDK Platform Android 7.0, API 24, revision 2
30- SDK Platform Android 6.0, API 23, revision 3
39- SDK Platform Android 4.0, API 14, revision 4
141- Android Support Repository, revision 36
142- Android Support Library, revision 23.2.1 (Obsolete)
149- Google Repository, revision 32
If that list looks like it includes a lot of redundant items, you are right. I don't know why we need 5 versions of the build tools (one of which is marked "obsolete") and 3 versions of the SDK. However I do know that if I remove any of those, our builds break.
You can install these with this command:
/usr/java/android-sdk/tools/android update sdk \
    --no-ui --all --filter 1,3,4,6,7,9,19,29,30,39,141,142,149
However there's a small problem with this: those numbers change as new packages are added to and removed from the repository.
Luckily there is a "name" for each module that (I hope) doesn't
change. However the names aren't shown unless you specify the
# /usr/java/android-sdk/tools/android list sdk -a -e
The output looks like:
Packages available for installation or update: 154
----------
id: 1 or "tools"
     Type: Tool
     Desc: Android SDK Tools, revision 25.1.7
----------
id: 2 or "tools-preview"
     Type: Tool
     Desc: Android SDK Tools, revision 25.2.2 rc1
...
Therefore a command that will always install that set of modules would be:
/usr/java/android-sdk/tools/android update sdk --no-ui --all \
    --filter tools,platform-tools,build-tools-24.0.1,\
build-tools-23.0.3,build-tools-23.0.2,build-tools-23.0.0,\
build-tools-19.1.0,android-24,android-23,android-14,\
extra-android-m2repository,extra-android-support,\
extra-google-m2repository
Feature request: The name assigned to each module should be listed in the regular listing (without the -e), or the normal listing should end with a note: "For details, add the -e flag."
EVALUATION: Great! (a) Thank you for the command-line tool. The docs could be a little bit better (I had to figure out the -e trick) but I got this to work. (b) Sadly, I can't automate this with Puppet/Chef because they have no way of knowing if a module is already installed, therefore I can't make an idempotent installer. Without that, the automation would blindly re-install the modules every time it runs, which is usually twice an hour. (c) I'd rather have these individual modules packaged as RPMs so I could just install the ones I need. (d) I'd appreciate a way to list which modules are installed. (e) update should not re-install modules that are already installed, unless a --force flag is given. What are we, barbarians?
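The closest I can get to idempotence is a shell guard that skips the installer when a module's directory already exists; a sketch, where the path-to-module mapping is my guess from poking around the SDK tree:

[ -d /usr/java/android-sdk/platforms/android-24 ] || \
    /usr/java/android-sdk/tools/android update sdk --no-ui --all --filter android-24

Puppet or Chef could wrap the same check in an exec guard, but you'd need one guard per module, which is exactly the busywork a real package manager would eliminate.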
The software won't run unless you've agreed to the license. According to Android's own website, you do this by asking a developer to accept it on their machine, then copying the resulting files to the CI server. Yes. I laughed too.
EVALUATION: There's no way to automate this. In the future I will probably make a package out of these files so that we can install them on any CI machine. I'm taking suggestions on what I should call this package. I think android-sdk-lie-about-license-agreements.rpm might be a good name.
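For what it's worth, the files in question appear to be the hash files in the licenses/ directory under the SDK root, so the "ask a developer" dance boils down to roughly this, with devbox as a placeholder hostname:

scp -r dev@devbox:android-sdk/licenses /usr/java/android-sdk/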
At this point we thought we were done, but the app build process was still breaking. Sigh. I'll save you the long story, but basically we discovered that the build tools want to be able to write to parts of their own install tree, specifically /usr/java/android-sdk/extras. It isn't clear if they need to be able to create files in that directory or write within the subdirectories. Fuck it. I don't have time for this shit. I just did:
chmod 0775 /usr/java/android-sdk/extras
chown $BUILDUSER /usr/java/android-sdk
chown -R $BUILDUSER /usr/java/android-sdk/extras
("$BUILDUSER" is the username that does the compiles. In our case it is
teamcity because we use TeamCity.)
Maybe I'll use my copious spare time some day to figure out why -R is needed. I mean... what sysadmin doesn't have tons of spare time to do science experiments like that? We're all just sitting around with nothing to do, right? In the meanwhile, -R works so I'm not touching it.
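(If the spare time ever materializes, the experiment itself is simple: drop a timestamp file, run one build, and ask find what changed underneath the SDK:

touch /tmp/before-build
# ... run one build ...
find /usr/java/android-sdk -newer /tmp/before-build -type f | sort

Whatever that prints is what actually needs to be writable.)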
EVALUATION: OMG. Please fix this, Android folks! Builds should not modify themselves! At least document what needs to be writable!
At this point the CI system started working.
Some of the steps I automated via Puppet, the rest I documented in a wiki page. In the future when we build additional CI hosts Puppet will do the easy stuff and we'll manually do the rest.
I don't like having manual steps but at our scale that is sufficient. At least the process is repeatable now. If I had to build dozens of machines, I'd wrap all of this up into RPMs and deploy them. However then the next time Android produces a new release, I'd have to do a lot of work wrapping the new files in an RPM, testing them, and so on. That's enough effort that it should be in a CI system. If you find that you need a CI system to build the CI system, you know your tools weren't designed with automation in mind.
Hopefully this blog post will help others going through this process.
If I have missed steps, or if I've missed ways of simplifying this, please post in the comments!
P.S. Dear Android team: I love you folks. I think Android is awesome and I love that you name your releases after desserts (though I was disappointed that "L" wasn't Limoncello... but that's just me being selfish). I hope you take my snark in good humor. I am a sysadmin who wants to support his developers as best he can, and fixing these problems with the Android SDK would really help. Then we can make the most awesome Android apps ever... which is what we all want. Thanks!
After having gone through all the holiday email it is now time to go over some of the briefings. The Rubrik briefing caught my eye as it had some big news in there. First of all, they landed a Series C round. Big congrats, especially considering the size: $61M is pretty substantial, I must say! Now I am not a financial analyst, so I am not going to spend too much time talking about it, as the introduction of a new version of their solution is more interesting to most of you. So what did Rubrik announce with version 3, aka Firefly?
First of all, the “Converged Data Management” term seems to be gone and “Cloud Data Management” was introduced, and to be honest I prefer “Cloud Data Management”. Mainly because data management is not just about data in your datacenter, but data in many different locations, which typically is the case for archival or backup data. So that is the marketing part, what was announced in terms of functionality?
Version 3.0 of Rubrik supports:
- Physical SQL Server and physical Linux workloads
- Archiving to public cloud targets (AWS S3, Microsoft Azure) and privately deployed object stores such as Scality
- A new Edge virtual appliance for remote and branch offices
- Erasure coding in place of mirroring
When it comes to physical SQL and Linux support, there is probably little that needs explaining: you will be able to back up those systems using the same policy-driven / SLA concepts Rubrik already provides in their UI. For those who didn't read my other articles on Rubrik, policy-based backup/data management (or SLA domains as they call it) is their big thing. No longer do you create a backup schedule. You create an SLA and assign that SLA to a workload, or even to a group. And now this concept applies to SQL and physical Linux as well, which is great if you still have physical workloads in your datacenter! Connecting to SQL is straightforward: there is a connector service, a simple MSI that needs to be installed.
Now all that data can be stored in AWS S3, or for instance Microsoft Azure in the public cloud, or maybe in a privately deployed Scality solution. The great thing about the different tiers of storage is that you qualify the tiers in their solution and data flows between them as defined in your workload SLA. This also goes for the announced Edge virtual appliance, which is basically a virtualized version of the Rubrik appliance that allows you to deploy a solution in remote and branch offices (ROBO). Through the SLA you bring data to your main datacenter, but you can also keep "locally cached" copies so that restores are fast.
Finally, Rubrik used mirroring in previous versions to safely store data. Very similar to VMware Virtual SAN, they now introduce erasure coding, which means they will be able to store data more efficiently, and according to Chris Wahl at no performance cost.
Overall an interesting 3.0 release of their platform. If you are looking for a new backup/data management solution, definitely one to keep your eye on.
"Rubrik landed new funding round and announced version 3.0" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.
This week I had the pleasure to join Pete and John again on the Virtually Speaking podcast, together with Ken Werneburg. We spoke about the upcoming VMworld event in Las Vegas. Throughout the show there are tips around sessions and vendors to look out for on the show floor. I think it was an interesting conversation…
In Google Search, our goal is to help users quickly find the best answers to their questions, regardless of the device they’re using. Today, we’re announcing two upcoming changes to mobile search results that make finding content easier for users.
Two years ago, we added a mobile-friendly label to help users find pages where the text and content was readable without zooming and the tap targets were appropriately spaced. Since then, we've seen the ecosystem evolve and we recently found that 85% of all pages in the mobile search results now meet this criterion and show the mobile-friendly label. To keep search results uncluttered, we'll be removing the label, although the mobile-friendly criteria will continue to be a ranking signal. We'll continue providing the mobile usability report in Search Console and the mobile-friendly test to help webmasters evaluate the effect of the mobile-friendly signal on their pages.
Although the majority of pages now have text and content on the page that is readable without zooming, we’ve recently seen many examples where these pages show intrusive interstitials to users. While the underlying content is present on the page and available to be indexed by Google, content may be visually obscured by an interstitial. This can frustrate users because they are unable to easily access the content that they were expecting when they tapped on the search result.
Pages that show intrusive interstitials provide a poorer experience to users than other pages where content is immediately accessible. This can be problematic on mobile devices where screens are often smaller. To improve the mobile search experience, after January 10, 2017, pages where content is not easily accessible to a user on the transition from the mobile search results may not rank as highly.
Here are some examples of techniques that make content less accessible to a user:
- Showing a popup that covers the main content, either immediately after the user navigates to a page from the search results, or while they are looking through the page.
- Displaying a standalone interstitial that the user has to dismiss before accessing the main content.
- Using a layout where the above-the-fold portion of the page appears similar to a standalone interstitial, but the original content has been inlined underneath the fold.
By contrast, here are some examples of techniques that, used responsibly, would not be affected by the new signal:
- Interstitials that appear to be in response to a legal obligation, such as for cookie usage or for age verification.
- Login dialogs on sites where content is not publicly indexable, such as private content like email or unindexable content behind a paywall.
- Banners that use a reasonable amount of screen space and are easily dismissible, such as the app install banners provided by Safari and Chrome.
We previously explored a signal that checked for interstitials that ask a user to install a mobile app. As we continued our development efforts, we saw the need to broaden our focus to interstitials more generally. Accordingly, to avoid duplication in our signals, we've removed the check for app-install interstitials from the mobile-friendly test and have incorporated it into this new signal in Search.
Remember, this new signal is just one of hundreds of signals that are used in ranking. The intent of the search query is still a very strong signal, so a page may still rank highly if it has great, relevant content. As always, if you have any questions or feedback, please visit our webmaster forums.
I’ll have more to say about this topic later this week, but the third edition of Managing Humans is out there. I updated the now rather silly site which first promoted the first edition of the book. Briefly:
You can buy the book in a variety of formats right here.
The Network Time Protocol (NTP) is a protocol for synchronizing the clocks of computer systems over packet-switched, variable-latency data networks. NTP uses UDP port 123 as its transport layer. It is designed particularly to resist the effects of variable latency (Jitter).
Read the rest of Install and configure Network Time Protocol (NTP) Server, Clients on Ubuntu 16.04 Server (906 words)
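For the impatient, the core of it is only a few commands on Ubuntu 16.04; a minimal sketch, where ntp.example.com is a placeholder for your own time source:

sudo apt-get install ntp
echo 'server ntp.example.com iburst' | sudo tee -a /etc/ntp.conf
sudo systemctl restart ntp
ntpq -p    # verify peers and sync status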
I am bad at finishing.
Many of the pieces I write start out like the morning when I began this article: an early morning caffeination session where I'm bouncing aimlessly around the Internet until I discover a thought. An exciting thought appears out of nowhere, and not only is it intriguing, but I can instantly see how that thought could be expanded and built into an article.
The majority of the time I find whatever pen or keyboard is nearby and pound out 100+ words as quickly as I can. The majority of the time I will never finish the piece. The Rands slush pile is deep with half-started ideas, and while a slush pile is a point of pride for me, I’m certain there are great half-written pieces that I will never finish.
The thought that started this piece was an image of a graph that helped explain the reasons I am bad at finishing, and it looks like this:
The horizontal axis is elapsed time; the vertical is joy. Elapsed time is the current amount of time spent on the piece. Joy is how much I love the work at this particular point in time. There are four distinct states within this graph:
Peak Joy. Ohmigosh. Ohmigosh. Ohmigosh. Such a good idea. Must write it down somewhere quickly because this idea is precious, it’s unique, and if I don’t write it down quickly, it will vanish from the universe forever. This is a steep and satisfying part of the graph, but it quickly bends into our next phase.
Building Depth, Giving Shape. The reality is that after capturing the original joyful thought, I only have 10 to 20% of the piece. I usually have a bit of the middle, but no beginning or end. Sometimes the thought is the title, often not. The second most important phase of writing this piece is when I'm building a form for the thought – I'm giving it shape. While the joy per second is decreasing over time, I still have a head full of residual energy from the first phase. The words are still pouring out of my fingers, and it's in this phase that I discover whether or not the original thought has depth.
This phase is also the one where most pieces die. It has to do with the slope of the curve. I always start building depth full of vim and vigor, but after I’ve written a couple of hundred words, how am I feeling? How much of that original energy still exists in my head? Is the original idea still producing additional smaller supporting exciting ideas? Do I feel I have more to say? How close to done do I feel?
My sense of the current half-life of joy regarding this new piece determines whether or not I’m incentivized to finish. What I’ve learned in the past decade of writing is that even with 1000+ words written, I’m not even 50% done, and if the joy isn’t there, I won’t finish. I’ll explain.
The Slog. The good news about entering the Slog is that I’ve entered this phase. The Slog is the part of writing where I believe I am filling in the unimportant parts. The bones – the depth – of the piece are written, and I’m filling in the gaps with obvious connective tissue, so the piece makes sense. Boring.
The Slog is an essential part of finishing a piece, and while the joy per second is lower, the work here can result in new original work within the piece. For example, while I had a good idea about what to call each phase of the graph before I started writing, I didn't find the final title until I was slogging through the words.
The slope of the curve doesn’t change much during the Slog. If I’ve started the Slog, it means that I’ve decided there is enough potential in the piece that I’m willing to slog through the middle and perhaps begin the laborious process of finishing.
The Endless Finish. The reasons I am bad at finishing have evolved. When I started regularly blogging, I was bad at finishing because I simply didn’t do it. The third edition of Managing Humans was just published, and I am still rewriting and finishing chapters in that book that I firmly believed were done years ago.
Getting through the Slog is work, and Old Rands used to believe the reward for getting through that phase was hitting the publish button. This practice resulted in poorly written half-thoughts littered with grammar and spelling errors. Thanks to the hard work of a great many individuals who have helped edit my pieces, I've discovered that the last 10% of the work is the difference between a good piece and a great one.
This leads to the current reason I’m bad at finishing. Take a look at that graph again.
While you should be suspicious of this graph because it is solely drawn to support this article, what I've learned in the past decade of writing is, "When you think you're done writing a piece, you're only 50% done." It's that math that I'm weighing as I finish adding depth. Do I see enough value in a piece that I'm willing to double the amount of time I've already spent on it to finish?
Whether it's writing an article or building a feature in software, the work of finishing is both the most important and the least interesting. My early reluctance to engage with an editor is the same gripe engineers have with writing unit tests, fixing bugs, and documenting their code. We told ourselves the same story, "It works… it's good enough," but what we were really saying was, "the interesting work is done."
If you’re shooting for good enough, not finishing is a great strategy. If you’re shooting for great, then you need to finish. You need to find an editor or a code reviewer who will take the time to rigorously critique your work. You need to listen carefully to that critique and not react with emotion, but understanding. It’s that understanding that will give you a better picture of your strengths and weaknesses so that next time around you’re aware of where you are likely to make mistakes or become lazy. Repeated useful critiques are how you become better at your craft.
The time spent finishing feels intolerable because it’s the hardest work. Joy can sustain you through the hard work of finishing, but here’s the secret: there’s a whole other source of satisfaction that arrives when the results of your hard work are appreciated by your audience…
Therein lies the real joy.
After the recent puppet-lint 2.0 release and the success of our puppet-lint 2.0 upgrade at work it felt like the right moment to claw some time back and update my own (11!) puppet-lint plugins to allow them to run on either puppet-lint 1 or 2. I’ve now completed this and pushed new versions of the gems to rubygems so if you’ve been waiting for version 2 compatible gems please feel free to test away.
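If you want to kick the tyres, the test loop is short; a minimal sketch, where the plugin gem name is a placeholder for whichever of the plugins you actually use:

gem install puppet-lint
gem install puppet-lint-your-favourite-check   # placeholder name
puppet-lint manifests/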
Now that I've realised exactly how many plugins I've ended up with, I've created a new GitHub repo, unixdaemon-puppet-lint-plugins, that will serve as a nicer discovery point for all of my plugins and a basic introduction to what they do. It's quite bare bones at the moment but it's a nicer approach than clicking around my GitHub profile looking for matching repo names.