Buying a Porsche on eBay and driving it to Israel – Episode 2

In the first episode I described our first attempt to bring Yoram's Porsche from Saint Louis to Israel.
That first attempt turned into an eventful rollercoaster: it started with realizing we were stuck, went through multiple attempts to fix the situation, and ended with the decision that it would be best to revert back to where we started, with the great help of the original car sellers, the angels of the trip. Along the way we also learned to deal with some real "bad karma" experiences, such as being busted by the local police and being thrown out of the hotel. We also met lots of good people, and many others supported us morally, starting with our family and friends who were with us every step of the road.
Interestingly enough, on my way back I learned from a nice lady I met at the airport's United Lounge that our story resembles an Israeli movie named Metallic Blues. I found it quite amusing – I can definitely see how the background of our trip could serve as good inspiration for another movie of that sort – but to make it more entertaining we'll have to modify the script quite a bit: replacing the police with the Mafia, Yoram could be the car dealer, and I could play Ivgi – actually that's not that far from reality 🙂
Anyway, in the last episode we didn't accomplish the mission of bringing the car to NY.
This is now our second chance to accomplish this mission. The plan was that Yoram would join me on one of my next trips so that I could escort him again.
 
We took a few lessons from the last attempt.
  • Don't drive over a holiday or long weekend
  • List a couple of garages along the road that you could call in case of emergency
  • Leave enough time margin to allow better flexibility in case something goes wrong
And of course – the main lesson is to expect the unexpected. The last experience did a great job in this regard, so I felt we didn't need any additional mental preparation 🙂
During the six weeks following our trip, Robert, the seller, replaced the master and slave cylinders and their line to make sure we were covered on this front from A to Z. Thank you Robert!
To minimize the chance of bad karma, we also decided that this time we'll keep quiet, below the radar, until we know we've reached a safe harbor.
A Lucky Start 
Our original plan was that Yoram would take a private ride and join me on one of my next trips.
Two weeks before the trip we got involved in an opportunity that required Yoram's help.
It just so happened that the opportunity was based in Saint Louis!
This was the first sign that this time the karma might actually work in our favor.
The Planning 
We had to adjust our travel plans quickly to take advantage of this opportunity.
To minimize driving over the weekend while finding a timing that would fit Nati's meeting schedule, we decided that this time we'll split the ride.
Yoram would drive from Saint Louis to pick up the car and start the first leg of the trip on his own. We figured Indianapolis would be the right place to join forces.
Day 1 – Carlinville, here I come
After a very good visit to the prospect, which gave us a great discussion topic for the lunch breaks, I drove my rental to the Hertz office nearest to Carlinville, where Robert is based … 45 miles away.
While waiting, I did not miss the chance to check out the local ice-cream counter… you could feel Halloween was around the corner…
 
 
I told Robert this was where I had to return the car, and Robert did not hesitate to offer me a ride back to his place – sweet 🙂
During my preparations for this trip, I made a list of things that might bring the car to a stop and force a part 3 of this trip. The top item was the old fuel lines, which might start leaking fuel at some point; combined with a long drive that can get the engine very warm, a leak could cause an engine fire and not only stop the car but also burn the engine.
The recommended precaution was replacing the fuel lines in the engine bay – not a hard job, but still one requiring several hours of labor.
When I mentioned this to Robert, he happily volunteered to assist, and once we arrived at his garage we immediately got down to business and made the replacement. Robert's help was again instrumental…
Night at Carlinville
We parted late at night and I drove the car to the nearby Carlinville motel – Carlin Villa.
It was no Hilton by any measure, but dead tired as I was, I slept like a baby.
The next morning, after a quick breakfast, I drove to meet Robert at his workplace – a John Deere branch – went over a last-minute checklist and made sure all fluids were topped off. We parted ways; Robert wished me good luck and told me I could contact him if I ran into any trouble again.
Heading to Indianapolis
My first destination was Indianapolis, over 300 miles away.
I drove alone, starting with about an hour on small country roads, then onto Interstate 55 and later Interstate 70, which would be the main highway all the way to Pennsylvania.
The drive went smoothly; I drove relatively slowly and with caution to get familiar with the car and be ready for any unexpected warning sign…
By early afternoon I arrived in Indianapolis.
Nati and I were scheduled to meet in Indianapolis right after he was done with his business meetings. His flight's ETA was 1 AM.
Talking with my wife, she had a "great" idea of how I could spend the several hours I had… she found me a nearby (almost 50 miles away) huge shopping mall and started sending me ideas for stuff she wanted, not to mention that she also asked me to surprise her with more stuff…
At the mall, when I spoke with Nati before his flight took off, Nati had a brilliant idea… let's skip the night's sleep and drive straight to Columbus, Ohio…
By 7 PM I came back from the mall and found a cheap motel to lay my head for a few hours until Nati's flight landed. I also suggested Nati reconsider his plan and at least come back with me to the motel to freshen up after his flight, but Nati dismissed the idea without second thoughts…
 
Day 2 – Driving from Saint Louis and Meeting at Indianapolis
Picking up Nati and driving to Ohio (Yoram).
Remembering the last time at STL, where I circled the terminal dozens of times waiting for Nati until the car stopped, this time I waited for Nati to let me know he had landed before I drove over. In addition, his 1 AM flight was probably the last flight of the day: the terminal was empty and no one pushed me to drive away once I parked in front of the arrival gates.
Once Nati came out, we immediately started our journey. 
Meeting at Indianapolis (Nati)
I had a few important meetings on Thursday morning, so I took the 4 PM flight from San Jose to Indianapolis, which arrives at 1 AM.
Driving over a sleepless night (Nati)
As soon as I realized I'd be taking the red-eye flight, we made another adjustment to the plan – rather than stopping over in Indianapolis, we'll take advantage of the night and drive the next leg right from the airport.
Settling down (Nati)
To make the trip entertaining we had to have some good music to escort us through this long journey.
We called Eliza again to be the Chief Music Officer, and she set us up with a nice playlist, which you can find here… Even though the car is 30 years old, the owner had rigged an audio cable that was good enough to plug in my iPhone as a media device.
Unfortunately the cable plug had a loose-ends issue, so I had to use my electrical-engineering skills and fix it while Yoram was still driving. I only had a finger-sized tool at my disposal, but that did the trick: after an hour of trying to find the spot where the loose end was broken, I fixed the cable and we were ready to rock & roll!
Spontaneous drive 
Based on the last experience, and the fact that we were driving through a sleepless night, we didn't really know how much we could push on the first day.
So we set no specific target and decided to drive until we felt exhausted. And that's what we did… we went from one leg to the next, made short coffee stops, until we decided it was time to meet the beautiful nature around us.
Our first stop: Columbus, the capital of Ohio.
We arrived in Columbus at 5 AM, so it was still pretty dark outside and most of the stores were closed.
We were able to find a nice coffee place near the university area and had a good salmon bagel and coffee – a pretty popular combo…
Getting Busted by the Police – Again!
We left Columbus, Ohio, on our journey toward our next stop.
As we crossed from Ohio to West Virginia we saw lots of police cars on the side of the road.
Yoram and I started joking that it would be funny if we got busted again like last time.
A few minutes after we joked about it we saw the siren lights in our mirror, and indeed we got pulled over again! Lesson to self – be careful what you wish for!!
This time the policeman stopped us because we had a paper sticker on our rear window instead of a plate, and that got his attention.
Unlike the previous time, we became friends with the policeman, who liked our story, so we ended up with just a warning ticket and some hiking recommendations for the area.
First Break – Biking in the great scenery of Wilderness Voyageurs
West Virginia at this time of year is painted with the stunning colors of autumn.
We decided we had to find a place to stop and get a real feel for these great views.
A quick search on TripAdvisor brought up Wilderness Voyageurs as the spot we were looking for.
Wilderness Voyageurs is a nice place for whitewater rafting and biking, which sounded like a perfect fit.
Unfortunately we couldn't take the whitewater experience, as we came in too late, so we settled for a nice bike ride along the river…
As you can see in the picture, the scenery was absolutely amazing, and so was the ride…
From Wilderness Voyageurs to Harrisburg
Encouraged by our accomplishments so far and the smooth experience, we decided to set an ambitious goal for our first day's ride and aim for Harrisburg, where we planned to stay for the night.
First stop: Lunch at a family restaurant
The road runs through a fairly unpopulated area, so finding a restaurant along the way wasn't an easy task, especially since we wanted something with some local taste.
We found a nice family restaurant that served traditional food.
 
 
Washing the car 
After lunch, Yoram decided we had to wash the car.
A quick Google search brought us to a self-service car wash.
It was quite an interesting experience – one of those things that would clearly never work in Israel.
We had to spend a good amount of time learning the system, but after that the process was fairly smooth: you swipe your credit card and then turn a selector that switches between soap and high-pressure water. After 20 minutes we had a shiny car and were ready for our last stop of the day.
Ending a 750-mile ride at a shitty hotel 🙂
I did a quick search on Trivago and found a Howard Johnson, a $60 hotel that also offered breakfast at that price. We decided to give it a shot and add cheap motels to our American-experience checklist 🙂
Well, no surprises there – the motel was quite old and stinky, but we had two large beds and a shower, all we needed after a long day of driving. At first we planned to sleep for just an hour and a half, enough to recharge for a tour of the city. As you can imagine, once we got into bed the best party I could think of was sleeping, and indeed this was the best sleep I had the entire trip.
The breakfast was surprisingly good – pancakes, cornflakes and even fresh bananas – just enough to fill us up for the morning.
  
Day 3 – Drive through to New Jersey and New York
Knowing that we had covered more than 2/3 of the road in just over a day gave us the confidence that this time the karma really was with us, and we could be much more relaxed. A good night's sleep also helped get us fully energized for the next day.
You haven't experienced the American experience until you've experienced shopping at the Jersey Gardens outlets…
With that state of mind we figured this would be a great day for shopping at the famous New Jersey outlet mall and buying some stuff for the family back home who gave us their support and moral help.
This is where I discovered that Yoram's shopping skills far exceed mine – a heritage, I guess, of living six years in NY. While I was still wandering around looking for American Eagle, he had almost finished his entire shopping duties!
The coupon Experience 
Coupons are a big part of the buying culture in the States.
Interestingly, the mall has a special arrangement for tourists in the form of a coupon book. The coupons provide fairly substantial discounts in many stores, but they require a bit of planning if you really want to make the best of them.
Burger & Milkshake @ Johnny Rockets
You can't feel the American experience without visiting a good burger restaurant, and Johnny Rockets is considered a classic in this category. We ordered a good milkshake, which is served as a full glass plus another glass with the extra! That by itself got me full, so I ordered a salad, and Yoram took a small traditional hamburger.
We ended the first part of the shopping happy and full, and now we were set for the closing round…
The shopping turned out to be a more exhausting experience than the entire drive!
Another change in plans
We finished our shopping around 17:00. At that point I started to digest the fact that we were a few minutes from EWR airport and the next United flight home left around 11 PM – enough time to jump on it – and indeed that's what I did.
Yoram continued to NY, and I was able to take the flight home and arrive a day earlier.
Finally at NYC (Brooklyn)
I arrived at the hotel by myself and stayed the night. In the morning I drove into the city, met Eliza, who had prepared the playlist we used during the trip, and we had a morning coffee together…
Driving to the Porsche shop @ Nyack
From there, I drove over to Nyack, NY, another good 50 miles north of the city, to bring the car to its final destination for the next several weeks.
This was a Porsche shop specializing in the 944 model, one that came recommended as honest, reliable and professional, and therefore I decided to leave my baby with them to get all the maintenance done before shipping it over to Israel. This is very important, since this car is very rare in Israel and any parts it may need have to be imported from abroad. In addition, no mechanic in Israel knows this car the way this shop does…
 
I decided to make a short stopover on the way to Nyack, on the Palisades Parkway, at a spot that lets you drive by the Hudson, and take some photos of the car there before saying goodbye.
Once I arrived, I saw a NY state trooper in his car, and my old Porsche without license plates immediately caught his attention. He pulled up next to me, did not even get out of his car, and started asking me questions. I told him the story and he seemed quite happy and impressed by it. He told me the car looked great and that he also has an '80s car he keeps as a hobby – a DeLorean… He did not even check my documents and gave me the green light to continue my trip.
 
Once I arrived at the shop in Nyack I met Nyol, whom I had talked to over the phone many times before, and he took me on a tour of his small kingdom (on a Sunday!). It was like a Disneyland for old Porsches, and he had all kinds of suggestions and improvements I could make to the car – I felt like a kid in a candy store. Oh, I wish I could have stayed there longer and had the budget to take on all of his suggestions; they were all good, and it was obvious he knew what he was doing…
Once we wrapped up at the shop, Nyol drove me over to the train station.
I took the train back to NYC. It was a weird feeling to leave the car and go by public transportation after the last few days, and also a bit sad to know I would not see it for at least four months…
Final Words
This trip was a great experience that I will never forget. It was also a lesson in persistence, optimism and friendship, along with great views of parts of America I didn't know before.
I hope that, despite the difficulties, this story will serve as an inspiration for others to follow.

Buying a Porsche on eBay and driving it to Israel – Episode 1

How the hell did I end up buying a 30-year-old Porsche…
It all started on a great evening a couple of months ago… I joined friends at a go-kart event, which was a blast, and afterwards the guys gathered for after-party drinks and burgers.
A friend of mine who has an antique Corvette he restored a few years back mentioned that one of the classic cars of the '80s was turning 30 years old, which meant it could be imported to Israel. He said prices for it were quite reasonable and that it was a great driver's car: the Porsche 944. I was immediately alert; this was one of the cars I grew up admiring in the '80s, alongside icons such as the 911 and the Lamborghini Countach…
The next few nights I was online researching the car, its prices, user feedback and import procedures… and a week later I went to see a car that someone had already imported to Israel and wanted to sell.
To make a long story short, buying in Israel did not materialize, so I started researching eBay and Craigslist, finding relevant cars and interacting with the sellers to learn more details.
After a couple of weeks, I zoomed in on the 1986 Porsche 944 Turbo model, which was much more powerful but meant waiting a few more months, since the model only entered production in 1986 (the car has to be 30 years old before you can import it to Israel).
After watching many auctions on eBay and learning the prices and typical things to watch for, I engaged in conversations with a guy from Illinois who had an '86 Porsche Turbo with many miles on it, but in reasonable condition and at a good price.
I decided to pull the trigger and bid for it and… won it.
I bought a Porsche waiting in the USA, now what…
This was only the beginning… Now I started planning how to bring it to Israel in the fastest, cheapest and most enjoyable way…
I decided to drive it over from Illinois to NY, bring it to a shop I got recommendations for up in Nyack, NY, and get it into good condition before shipping it over to Israel.
Logistics is quite important here: to import an old car to Israel, it has to be at least 30 years old. If it is less than 30, you have to store it until it gets there, which can be quite expensive.
In addition, import taxes in Israel are extremely high: you pay 117% tax on your imported car.
Furthermore, the tax is not only on the car; it also applies to transporting it in the US, shipping it to Israel and any upgrades you make to the car (normal maintenance and repairs are not taxed).
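To get a feel for how the 117% tax compounds these costs, here is a rough sketch in Python. The tax rate is the one mentioned above; the dollar amounts are made-up examples, not real quotes from my purchase.

```python
# Illustrative only: estimate the landed cost of importing a classic car
# to Israel. The 117% rate is from the post; all dollar figures below
# are hypothetical examples.

TAX_RATE = 1.17  # 117% import tax

def landed_cost(purchase, us_transport, shipping, upgrades=0.0,
                maintenance=0.0):
    """Tax applies to the car, US transport, shipping and upgrades,
    but NOT to normal maintenance and repairs."""
    taxable = purchase + us_transport + shipping + upgrades
    return taxable * (1 + TAX_RATE) + maintenance

# Example: a $10,000 car, $500 of US transport, $1,500 of shipping
print(f"Total landed cost: ${landed_cost(10_000, 500, 1_500):,.0f}")
```

Note how every dollar spent on transport or shipping before the car lands effectively costs $2.17 – which is exactly why driving the car to the port yourself can be worth the trouble.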
Therefore, I decided to drive the car over from Illinois to NY myself, both to save on transport costs and to learn of any repairs that needed to be made before shipping it over… This was the beginning of quite an adventure…
Prior to bidding on the car, I talked with the seller multiple times to make sure he was honest and also to ensure I could leave the car with him for a few weeks until I got to the US.
He was very friendly and helpful, and I got comfortable doing business with him.
Upon winning, I coordinated a date when I would come and meet him.
A couple of days before the flight, I contacted him again to confirm, and he told me his brother would meet me, since the car was in his brother's name and he was available to meet.
The Original Plan:
Drive over 1,000 miles in less than three days, bringing the car from Carlinville, IL to NYC (dropping Nati off at Newark airport on the final leg). Not to mention that the first day also included driving from St. Louis to Carlinville, getting the car, and driving back to St. Louis to pick up Nati and start the trip…
Day 1 – Picking up the Car – All Hell Breaks Loose
4:30am – Landing @ STL airport
I took a red-eye flight with a connection and landed at 4:30 AM, then took a cab ride over to Illinois, a good hour and a half away ($140, the same price as renting a car and dropping it off in a different state).
I had told them I would come early, but since I did not want to disturb them at that hour, I stayed at a local diner in town called Abella's.
It was a small diner with all local characters but me. I spent several hours there – a few more than I originally planned, but I got the local atmosphere this way…
11:00am – meeting the seller’s (Robert) brother (Kyle) and their dad
After much waiting, I finally saw Kyle (the brother) and his dad entering the diner. I went outside with them and got into a huge pickup truck full of tools and work items.
By 11 AM we did a test drive; the car's exterior looked very good for a 30-year-old car. The interior, less so – it will require quite a bit of TLC when it gets to its new home…
The car drove nicely, although the "power steering" meant you had to use lots of power to steer… It also had some hard vibration when starting up in first gear. Other than that, the car drove very nicely: it was fast, handling was firm, and when the turbo kicked in it was much, much fun.
12:00pm – Closing the transaction
Afterwards, we moved on to closing the transaction. We went over to the bank and notarized a bill of sale, which was an experience in itself… a very nice old lady handled it: she took both our fingerprints, invited one of the girls in the bank to be a witness and sealed the bill of sale for us. She did not want any money for her service, even though neither of us had any business at that bank. From there we went to the local DMV, where two clerks were doing everything from testing and checking new drivers, theory testing and eyesight testing, to, in our case, switching the ownership and providing me with a drive-away temporary license. All for $10…
From there, I took the car, checked the fluids and drove it to meet Nati at the STL airport. I had two and a half hours to drive over. The car was good and I got there an hour ahead of time. I ate lunch at a nearby Applebee's, took a photo of the car and sent it over to Israel to let everyone know everything was on plan… boy, did I jinx myself…
My car at the Applebee's parking
16:30 – At the airport
From there I drove straight to the airport.
I got there right at the time Nati landed, but since he had checked bags, it took him over 30 minutes to get out… during that time I circled the terminal about 10 times, since the police did not allow me to stop for more than a couple of minutes in front of the terminal… on my last round, after stopping at a light 10 meters from the pickup spot, I heard the engine stop. I lifted my foot from the clutch pedal and the pedal stayed down… clutch failure… running quickly through everything I had read during my research, I was quite sure what the issue was: the master or slave cylinder, the hydraulic pump of the clutch…
Nati’s side…
9:00 AM – Picking the car
Yoram arrived to pick up the car a few hours before I was due to join him after a flight from San Francisco.
Everything looked promising, and there was almost no sign of the drama we were about to experience.
17:00 – Joining forces 
Yoram called me as soon as I landed and said he would wait for me at the entrance after I collected my luggage. I had nominated Eliza as the "Chief Music Officer", and she already had a few songs that I really liked ready to kick off the trip. Yoram had bought a few spare parts and tools to make sure we would be ready to fix things if something went wrong.
We thought we had it all ready to go, but nothing prepared us for what was about to become a much bigger drama than we expected.
18:00 – The Drama Begins..
As soon as I arrived to collect my luggage, Yoram called again. This time he sounded more concerned: the car was stuck, and he believed it was something serious, as it looked like the clutch was broken. At that point I was in a bit of denial, expecting that soon it would all be behind us and we could start the trip. As soon as I got out of the terminal, I found Yoram stuck with the car, with all the insurance papers he had prepared ahead of time in the back.
A tough-looking Missouri policeman was standing next to him, urging him to leave the scene and making comments that we were not going to be able to handle the situation ourselves. Yoram had also lost his cellular data plan, so he couldn't use the internet to find local help or more information on how to handle the clutch failure. Yoram was trying to get a tow driver through the insurance, but not very successfully. The policeman started to lose his patience and ordered a tow driver who works with the police to take us away from the airport. At this point we had no idea where to take the car. We were also under time pressure, as we knew the tow-truck driver couldn't stay with us for too long.
Luckily, the tow driver, David, turned out to be "an angel", and when he saw the situation we were in he did his best to help. He called friends to see if there was any garage nearby and brought us to one that was open and might be able to assist, but as we found out, US garages tend to be fairly strict about the types of cars they are willing to handle.
At this point the thought that we were in "deep shit" started to sink in, and we realized we were not getting out of here soon – so in parallel we started discussing plans B and C.
19:00 All the odds play against us
Here is a summary of the situation at that point:
  • Long weekend – all the garages are closed for the long weekend.
  • A Porsche is not a common car, and many garages that were open refused to deal with it.
  • No word from Robert.
  • All the options to tow the car to NY look unrealistic.
  • Arrrrrgh…..

Stuck @ the airport

Poor Yoram – as much as he tried to plan for the worst, he never saw this one coming, and so soon.
The good news, though, is that had this happened somewhere down the road, we would probably have been in a much worse situation.
19:30 Staying at Saint Louis
We had to find a quick resolution, as the tow driver had to leave.
David came up with the idea of leaving the car with one of the Porsche dealers. I booked a hotel near the airport. David took us to the hotel and was kind enough to take the car to the Porsche dealer all by himself!
Yoram and I got to the Holiday Inn hotel near the airport. Yoram fell asleep almost immediately – two days of almost no sleep, plus all the tension of the past hours, had exhausted him completely. I took advantage of that, went to relax in the hotel's small swimming pool, and got myself energized to handle the situation the next day.
Summary of Day 1:
The thought that we were NOT going to be able to drive the car started to sink in, even though we still gave ourselves a 15–50% chance of fixing it and driving at least part of the way.
In parallel, Yoram was working on a plan to tow the car to NY through a forum he had signed up for a day earlier, and he got some interesting proposals that put this option within a reasonable range.
So day 1 ended with the realization that we were in big trouble, but we were still cautiously optimistic that we would get over it one way or another.
Day 2 – The drama turned into an eventful rollercoaster 
Renting a Car – the first sign that we were starting to realize the situation we were in…
Yoram had a few hours of sleep and got up early to start working on plan B.
When I woke up, we decided to kick off the day with a relaxing tour of the area, which was in itself an interesting insight into the lower-middle-class way of living. During this walk we realized we were probably going to stay here longer, so we had better rent a car.
Porsche Dealer – our first hope
We rented a car from a nearby Hertz and drove to the Porsche dealer – we found the car exactly where David the tow driver had told us it would be.
As soon as we got in, we found out the dealer had stopped servicing Porsches two weeks earlier, had sold all his parts, and was now refusing to work on them 😦
Contacting the original seller  – our second hope 
In his despair, Yoram made more explicit contact with the original seller, Robert, asking for his assistance. Robert eventually got back to Yoram and started showing signs that he was coming to help, still without a firm commitment, as he was busy handling trouble with his own car. With that positive sign, the odds of fixing the car went up to 50%.
10:00 AM – Current state 
In summary: the car was stuck at the Porsche dealer, which was no longer a Porsche dealership. We didn't have an alternative plan on the horizon, so our only hope was to rely on Robert, who had signaled he was coming to help – but not definitively.
11:00 – Making the best out of the situation 
The lake
At this point Yoram thought Robert was still our best option, and since we had nothing else to do anyway, we were better off finding something nice to do in the area.
I jumped on the opportunity and found us a nice place a 20-minute ride away that rents kayaks and SUP boards: Creve Coeur Lake Rentals, on Marine Ave.
Lake location
We ended up picking the SUP boards and spent an hour paddling on a nice lake not far from the branches of the Missouri River. Even though I'm used to kayaking, I had never tried SUP, so for both me and Yoram it was a first experience, which made it even more fun. The conditions were perfect for beginners: it was pretty much a flat river that had turned into a lake, so stability wasn't as difficult a challenge as it is in sea water.
12:00 – The second hope becomes more real
We started getting more signs of life from Robert, who texted Yoram with instructions on where to buy the broken part and also mentioned he would be willing to carry the cost for it. Those signs made us more confident that he would drive over to fix the car.
12:30 – The "Ice-cream Nazi"
There were a few hours left before Robert & Kyle would get to our area, so we decided to grab some local ice cream. With Yelp's help we found Mr. Wizard's Frozen Custard, which turned out to be the ice-cream equivalent of the "Soup Nazi" from Seinfeld.
 
Soup Nazi… Ice-cream Nazi…
The ice cream is served through small windows. The seller stands behind a wall with only one small window open, and you have limited options to choose from. Outside, the temperature reaches 30 degrees.
I chose pistachio and Yoram coconut. To pick up the ice cream we had to go to the third window. It was quite amusing to watch how this works, as it was the same lady moving from one window to the other – but still, that's how the system works, you know…
15:00 Back to the broken car – Help is finally on its way
Now it was time to get back to the car and wait to hear from Robert & Kyle about their plan – note that up to this point we hadn't gotten a firm commitment on whether they were coming to help us or not. It's also important to note that the seller had to deal with his own broken car and could only come over once he was done with that.
When we returned to the car, we got final confirmation that Robert & Kyle were on their way and would be in the area in a few hours.
That was the best news of the day.
Yoram suggested we buy a six-pack of beer and some snacks to keep them happy.
We headed to a nearby Target and started a small shopping spree 🙂
17:00 Help Arrived 
We got back to the car, and indeed, after two hours, I saw the two brothers Yoram had told me about, wearing dirty shirts and riding a GMC with lots of tools ready in the back. It was the happiest moment of the entire journey so far; even in dirty shirts, they appeared to me as nothing less than shiny angels…
Shiny angels working on the car
Apparently the two brothers are very handy, as they deal with trucks and cars on a daily basis.
The younger brother, who owned the Porsche, seemed to know it inside out and could spell out part numbers to the dealers by heart.
He took a quick look at the car and realized that they would indeed need to put it on a lift to fix it.
working on the car 2
The car was parked in the lot of a fairly high-class car dealership, so we realized we had to move it to a different location – otherwise the dealer would kick us out. But how do you move a car without a clutch?
Well, it turns out that older cars have a simpler gearbox that makes it possible to switch gears without a clutch, based purely on timing – quite an amazing thing to see.
17:30 We got the wrong part 
Kyle found the broken part – not surprisingly, it wasn't the part we had bought earlier.
We started looking for a replacement, but since it was a weekend and already late in the day, we couldn't find anyone who could supply such a part.
18:00 Fixing the broken part
The two brothers decided to take a shot at fixing the broken part, hoping it would hold for a short while until we could find a new one.
Rebuilding the slave cylinder
After a few hours of work, this attempt failed.
work...
19:00 Back to square one 
The two brothers decided to take the car back, driving it without a clutch!
This meant that we were now back at square one – the car was back in its original location and we would have to pick it up again at the next opportunity.
Obviously, the main lesson from this experience is not to plan such a trip over a long weekend: we were fairly helpless when something went wrong, and without a miracle in the form of the tow driver and the two brothers, we would probably still be stuck there today.
19:00 It ain't over till it's over
OK, so we got the car sorted out – now we needed to find a place to sleep and get on a plane to NY in time for my connecting flight to Tel Aviv.
It took me a few minutes to book a flight and a hotel through my mobile apps – quite amazing in itself.
We then headed to a hotel 20 minutes away from our location.
19:40 Getting busted by the local police
Just before we arrived at the hotel, we got stopped by the police.
Two flashing police cars stopped me at the corner by the hotel, and an officer came toward me with a flashlight pointed at my face. His first question was whether I had been drinking – I immediately said no, but at the same moment remembered the six-pack of beer we had bought, sitting on the car floor next to Yoram's legs. Luckily Yoram covered it with his legs, so the policeman saw nothing. After I gave him my desperate look, and after he checked my driving license, he decided to give up on me – phew!
Nati gets busted
20:00 The hotel saga ..
Great – now we were ready to get to the hotel and close out the day.
To our surprise, the lady at the front desk was busy on the phone. Another clerk told us that our reservation was not in their system. After a quick check, it turned out that the lady on the phone was in the middle of canceling our reservation, simply because the hotel was overbooked and she had no rooms left.
It was already 9 PM and we literally had nowhere to go. All the hotels in the area seemed completely booked.
Trying to escalate this with IHG customer service proved useless: after 40 minutes of talking to some guy in India who didn't even know where Saint Louis was, I gave up and booked another hotel through Trivago. Our story didn't end there – apparently I made a mistake and booked the hotel for the next night, so we were about to get thrown out of this hotel as well.
Luckily, after just a few minutes they got another room cleared. The only issue was that it was a single room with a double bed – and we are two big men 🙂 It took Yoram and me a few seconds to accept the offer, knowing that the alternative was to go looking for another hotel at 10 PM.
22:00 Finally a closing toast 
And here's the happy ending – we decided to end the trip in the Saint Louis University area, which has lots of bars and live music. We walked along Delmar and settled in for drinks at
Cicero's. We had fun and some drinks, and got ourselves so exhausted that we fell asleep almost immediately, the minute we got back to the hotel.
St. Louis nightlife
Over but not done with
Both Yoram and I ended the trip even more determined to come back next time and finish what we started – so stay tuned…
Final Words – When the going gets tough, the tough get going..
Bad things happen when you least expect them. When they do, you have two choices: get upset, or cope with the situation and make the best of it. This experience was definitely at the far end of that spectrum. In this episode we had lots of things going in the wrong direction, and I'm very proud of the way we handled them and turned them into an experience none of us will forget any time soon. We met lots of great people along the way and managed to have fun; we proved that you can enjoy even a bad experience, even at times when it looks like "all hell broke loose"...
Which reminds me of a Billy Ocean song from the days when I used to play DJ – "When the Going Gets Tough...". Given that we're in a car from the '80s, a song from the '80s makes a perfect ending for our first episode – enjoy!
Link to the full Album:
Stay tuned for episode two…
Posted in Uncategorized | Leave a comment

Creating a scalable Blueprinting service using Cloudify

Cloudify is an open-source, TOSCA-based, pure-play orchestration platform.

Cloudify is commonly used to orchestrate applications on various clouds and even on bare-metal servers.

When I was asked to design a SaaS offering with a major partner that would feature TOSCA-based application orchestration, Cloudify version 3 was already out, with significant features and design tenets that made it extremely easy to extend and embed.

At the time, Cloudify lacked some elements that we identified as key to providing a large-scale SaaS product:

  1. Multi-user, multi-tenant support and integration with external identity-management services
  2. Horizontal scaling of the orchestration engine
  3. Minimal footprint (in terms of resources) per application deployment and per VM within a deployment

We had a goal of releasing the service in two months, and although all of these elements were on the Cloudify roadmap, the product was extremely successful – and with such success come many customer demands and feature requests. Two months was therefore too aggressive a timeline to rely on the product roadmap fulfilling our requirements.

Based on the above, the approach we took was the following:

Utilize Cloudify managers as our service's orchestration engine, but wrap them with a façade that on one end keeps the same RESTful API that Cloudify exposes, yet is smart enough to not only front the Cloudify managers but also provide the multi-user, multi-tenancy functionality our service required, as well as the horizontal scaling. The third point, minimizing the deployment footprint, was the only one we left for the core product to address within the short timeframe we had.

We designed the multi-tenancy around our partner's token-based identity service. We created separation between tenants in a single orchestration engine with a hash algorithm that encodes a unique tenant identifier into all of the engine keys (such as blueprint IDs and deployment IDs) and filters back to each tenant only its own private data.
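As a rough sketch of the idea – the post does not spell out the actual hash scheme, so the SHA-256 prefix used here is an assumption for illustration only:

```python
import hashlib

def tenant_key(tenant_id, resource_id):
    """Encode a unique tenant hash into an engine key such as a blueprint ID."""
    # Assumption: a short SHA-256 prefix stands in for the real tenant encoding.
    prefix = hashlib.sha256(tenant_id.encode()).hexdigest()[:12]
    return "%s-%s" % (prefix, resource_id)

def filter_for_tenant(tenant_id, keys):
    """Return only the engine keys that belong to the given tenant."""
    prefix = hashlib.sha256(tenant_id.encode()).hexdigest()[:12]
    return [k for k in keys if k.startswith(prefix + "-")]
```

Because every key carries the tenant encoding, a single engine can hold data for many tenants while each tenant only ever sees its own.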

We handled horizontal scaling by having all tenants access a single entry point (our façade), which load-balances between multiple orchestration engines on the backend. We also added stickiness to the load balancing.
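Sticky routing per tenant can be as simple as a deterministic hash over the tenant ID; a minimal sketch (the engine names are hypothetical, not the actual façade code):

```python
import hashlib

# Hypothetical pool of backend orchestration engines.
BACKENDS = ["engine-1", "engine-2", "engine-3"]

def pick_backend(tenant_id, backends=BACKENDS):
    """Sticky load balancing: the same tenant always reaches the same engine."""
    idx = int(hashlib.md5(tenant_id.encode()).hexdigest(), 16) % len(backends)
    return backends[idx]
```

A deterministic hash gives stickiness without any shared routing state in the façade.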

Since this service is part of a large public cloud offering by our partner, we wanted to maintain the same user experience and the same tools its customers already use. Both CLI and UI were addressed; CLI support was added by adding new command verbs to the existing CLI tool.

Even though Cloudify has an impressive UI in its own right, to ensure an optimal user experience we built a new UI from the ground up to match the existing cloud UI. The amazing part here is that, thanks to the clean RESTful API we had, the UI was built in a couple of weeks in parallel with the work on the façade. In the first week it worked directly against the Cloudify orchestration engine's RESTful API; in the second week, once the façade API was ready, the switch was nearly transparent and took a couple of hours.

Another interesting point I wanted to bring up is how we managed the operation of the service and its development. Due to the very short timeframe we operated in, we managed the development in weekly sprints, and each sprint ended with that sprint's functionality deployed to production.

To enable this aggressive deployment process, we basically drank our own Kool-Aid: we created blueprints of the service's front-end and back-end services and used Cloudify to orchestrate the deployment and scaling of our service itself.

Looking back, I am extremely happy with the approach we took.

The Cloudify product has improved over these couple of months, and its new 3.2.1 release already supports some of the elements we built ourselves. Others are still in the works, and the flexibility we gained by fronting the orchestration engines is something we continue to take advantage of to tailor the service to our users.


Orchestrating applications on vCloudAir using Tosca & Cloudify

TOSCA is an emerging OASIS standard for modeling the orchestration of distributed applications.
vCloudAir is VMware's public cloud offering.
Cloudify lets you automate workflows on nodes modeled using the TOSCA spec.

In this post, I will explain how you can run your first application orchestration on vCloudAir using Cloudify.

We will start by installing the cloudify CLI locally, then bootstrap a Cloudify manager and finally upload and deploy our web application blueprint on to the vCloudAir cloud using the manager.

A video demonstrating this process can be seen at http://getcloudify.org/vmware-hybrid-cloud.html

To get started, you will need vCloudAir account credentials for a subscription account (OnDemand support is coming soon).

In addition, you need to create an Ubuntu template that can support Docker (version 14.04, or an older release with a kernel upgrade).
Add passwordless SSH to this template by generating an SSH key pair and adding the public key to the user's .ssh/authorized_keys file.

We will start by installing the Cloudify command line tool (CLI):
1. Make sure you have Python 2.7 and the pip package manager (if pip is not installed, please install it using these instructions: https://pip.pypa.io/en/latest/installing.html)
2. Install python virtualenv
pip install virtualenv
3. Create a new virtual environment
virtualenv yournewcfyenv
4. Cd into this new folder (“yournewcfyenv”)
5. Activate the virtual environment
source ./bin/activate
6. Install the Cloudify CLI
pip install cloudify
7. Run cfy --version to ensure the CLI tool was installed successfully
With the Cloudify CLI installed, we want to deploy a Cloudify manager onto our vCloudAir cloud.
Cloudify is built on a pluggable architecture. Support for a cloud is provided by a cloud-specific plugin, which contains the manager blueprint – a blueprint that installs the Cloudify manager application the same way we will later install our application blueprint.

1. Get the vCloudAir cloud plugin from github
wget https://github.com/cloudify-cosmo/tosca-vcloud-plugin/archive/1.0m2.zip
unzip 1.0m2.zip

2. Go into the tosca-vcloud-plugin folder.
3. Under the manager_blueprint folder, edit the inputs.json.template file and fill in all of the empty properties.

{
"vcloud_username": "",
"vcloud_password": "",
"vcloud_url": "https://vchs.vmware.com",
"vcloud_service": "<#########-####>",
"vcloud_vcd": "<#########-####>",
"manager_server_name": "",
"manager_server_catalog": "",
"manager_server_template": "",
"management_network_name": "",
"floating_ip_gateway": "<#########-####>",
"floating_ip_public_ip": "###.###.###.###",
"manager_private_key_path": "",
"agent_private_key_path": "<~/.ssh/vcloud.pem>"

}

4. Save it under the same name, without the .template suffix
5. To start the actual bootstrap run the following commands:
cfy init
cfy local install-plugins -p ./manager_blueprint/vcloud.yaml
cfy bootstrap -p ./manager_blueprint/vcloud.yaml -i ./manager_blueprint/inputs.json

6. You should see quite a long console output that ends with the IP address of your newly created Cloudify manager.

Once the Cloudify manager is up, it is time to upload our blueprint – but first, we need to get it:
1. Download the example blueprint
wget https://github.com/cloudify-cosmo/cloudify-nodecellar-docker-example/archive/vcloud-plugin.zip
unzip vcloud-plugin.zip
2. Upload the blueprint to the manager
cfy blueprints upload -b myblueprint -p cloudify-nodecellar-docker-example/blueprint/docker-vcloud-blueprint.yaml
3. Open the manager IP in a web browser (the IP can be obtained by running cfy status)
4. Select the "myblueprint" blueprint and a topology view describing the deployment is shown
5. Click the "create deployment" button
6. In the pop-up dialog, fill in all the fields and name the deployment
7. Click deploy and the deployment will start its initialization process
8. Once it is done, start the install workflow from within the deployment view
9. When all nodes turn green, your application is ready for viewing

 

 

 

In this post we went over a simple scenario of deploying a Cloudify manager on vCloudAir and afterwards uploading a TOSCA-inspired blueprint and deploying it to the cloud.

This allows you to model your DevOps procedures in easy-to-read, maintainable documents that can be executed using Cloudify on vCloudAir, vSphere, and hybrid cloud environments.

* The Cloudify vCloudAir plugin is under development and this preview is considered an alpha. In this version, each VM we assign a floating IP to (via a DNAT rule) holds that public IP exclusively. Therefore, to have the manager and the blueprint VM running as explained above, you will need at least two public IPs that do not have any NAT rules defined. For the plugin release, we will remove this limitation and allow you to define port-level NAT rules.

vCloudAir and Cloudify

VMware has been at the forefront of the virtualization space for many years and is by far the most popular virtualization solution for enterprises.
In recent years, as cloud technology became popular, VMware released its vCloud product line to target private clouds.
More recently, VMware entered the public cloud space too, with its vCloudAir product.

Cloudify, a cloud orchestration and automation tool, has broad support for orchestrating applications on many different clouds, the most popular being OpenStack-based clouds and EC2.
Obviously adding support for the vCloudAir cloud was an important target for us.
The VMware vSphere & vCloudAir plugins allow provisioning resources on vCloudAir, as well as hybrid cloud support in heterogeneous environments running VMware and OpenStack clouds side by side.
Cloudify can even orchestrate applications that span VMware and OpenStack in the same deployment.

Examining the vCloudAir API, it became apparent that it was heavily influenced by the traditional VMware products and customer base.
Compared to OpenStack and EC2, it is much more IT-operations focused than developer- and DevOps-focused.
This means the API exposes a tremendous amount of customization and control power, but at the same time, an operation that takes a couple of simple API calls elsewhere translates to many more calls in vCloud.

In order to support a new cloud in Cloudify, we have to create or customize a plugin that exposes the different objects and interfaces/operations available for that cloud.
In my case, as time was short and I wanted to get going as quickly as possible, I chose to build on an existing plugin that exposes Apache Libcloud, which itself lets you interact with different cloud APIs from Python.
Cloudify already had a Libcloud plugin used for the EC2 API.
I just extended it to expose the vCloudAir objects too.

This is a work in progress, and I currently have just the server objects (server_plugin) exposed, as can be seen here:
Libcloud Plugin

While testing my work against an actual vCloud account, I found that some of the functionality I used in the Libcloud vCloud driver (version 0.15.1) did not agree with the API used for my account. For example, getting network details was not working for me, but listing the networks did work.
Because of this, I had to fork the Libcloud repo and make a few changes to bypass these issues.
My modified version is at:
Modified Libcloud

I used the Cloudify node-cellar blueprint example:
NodeCellar Example

All of the code stays the same; the only differences are in the blueprint YAML:

Import the Libcloud plugin definition:

imports:
  - http://www.getcloudify.org/spec/cloudify/3.1m5/types.yaml
  - libcloud.yaml

Change the VM type definition to vCloud:

vm_host:
  derived_from: cloudify.libcloud.server
  properties:
    cloudify_agent:
      default:
        user: ubuntu
        key: /home/ubuntu/id_rsa
    server:
      default:
        ### if defined, will serve as the hostname for the started instance,
        ### otherwise, the node_id will be used
        #name: no_name ### HOST_NAME""
        # image: Ubuntu Server 12.04 LTS (amd64 20140619)
        # image_name: Ubuntu Server 12.04 LTS (amd64 20140619)
        image_name: ubuntu_1204_64bit
        image: ubuntu_1204_64bit
        ram: 4096 ### FLAVOR_NAME
        management_network_name: CFY-Internal ### Network name
    connection_config:
      default:
        cloud_provider_name: vcloud
        access_id: **************@vmware.com@***************
        secret_key: ***************
        host: ***vcd.vchs.vmware.com
        port: 443

The rest of the YAML stays exactly the same. I just took out the security-group and floating-IP definitions I did not yet have ready in my vCloud plugin.

Since Cloudify lets you define your cloud application orchestration independently of the actual IaaS the application will run on, it is very easy to run the same application on different clouds with almost no changes.

nodecellar

Using a common management and orchestration layer as an abstraction over both VMware and OpenStack provides a common management and deployment infrastructure.
The application is kept unaware of whether it is running on OpenStack or VMware; since the calls to each of the infrastructure components are centralized into one driver per environment, each driver is managed once for all the applications.
Additionally, there is a default implementation for the built-in types, so in most cases the user needs to deal with the implementation details of each element type only for specific customizations.
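The "one driver per environment" idea can be sketched as a minimal interface; the class and method names here are hypothetical illustrations, not the actual Cloudify plugin API:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One driver per environment; applications code only against this interface."""
    @abstractmethod
    def create_server(self, name):
        ...

class OpenStackDriver(CloudDriver):
    def create_server(self, name):
        return "openstack:" + name   # a real driver would call the Nova API here

class VMwareDriver(CloudDriver):
    def create_server(self, name):
        return "vcloud:" + name      # a real driver would call the vCloud API here

def deploy_app(driver):
    # The application itself never knows which cloud it is running on.
    return driver.create_server("web-1")
```

Swapping clouds then means swapping the driver object, not changing the application.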


Cloudify Heat Plugin.

OpenStack Heat makes orchestrating the deployment of multiple OpenStack elements a breeze using its "stack" concept.
By defining a HOT (Heat Orchestration Template) document that describes the stack and "creating" the stack based on that document, Heat orchestrates the deployment of many elements, including networks, subnets, ports, floating IPs, security groups, servers, and more.
Cloudify can harness Heat to bring up the hardware stack and basically continue from where the Heat deployment ends, adding software deployment workflows, monitoring, and analytics for ongoing management of your deployments on top of it.

In this post, I will explain and show an example of one aspect of the integration that lets Cloudify "import" a Heat stack and build a Cloudify blueprint (TOSCA-inspired) on top of it.

First, we define a Heat stack and deploy it. The stack we will use describes a network, subnet, port, and floating IP. It builds on a pre-existing router and public network, which we supply as parameters to the stack. We can either change the default values in the file or add an environment file to the deployment that sets these parameters. In our case, let's change the default values.

Once it is ready, we can deploy it from the command line (if we have the OpenStack environment set up on our system), or we can log into the OpenStack web UI (Horizon) and go to the Project => Orchestration section.
Run "create stack" from the UI, or just type:

heat stack-create -f ./simple_stack.yaml my_stack_name

Run

heat stack-list

or check the web UI to make sure the stack deployment completed successfully.

Heat stack topology

Once the stack is deployed, we can run the process to import it into Cloudify:
Get the tool by:

git clone git@github.com:yoramw/Cloudify-Heat-Plugin.git

Update the mapping file heat_mappings.json with Cloudify management network name and the stack name.
Run the import utility:

./bin/heat_resource_fetcher -s hello_stack -m ./heat_mappings.json --output-file ./out.yaml

The out.yaml output becomes the basis of our Cloudify blueprint.
Viewing the out.yaml file, we can see the different Heat elements in their Cloudify representation.

In order to build the Cloudify blueprint from out.yaml, we simply open it and add our nodes and their relationships.

In our example, we will add a simple python web server node that will be deployed on top of the server instance “my_instance” that Heat deployed.

This server uses the Cloudify bash plugin, so we will first add an import for this plugin in the imports and types definition sections:

imports:
  - http://www.getcloudify.org/spec/bash-plugin/1.0/plugin.yaml

types:
  # A web server configured with bash scripts
  cloudify.types.bash.web_server_with_index_and_image:
    derived_from: cloudify.types.bash.web_server
    properties:
      - image_path
      - index_path

Then we will add at the end of the file, the web server node itself:

- name: http_web_server
  type: cloudify.types.bash.web_server_with_index_and_image
  properties:
    port: 8080
    image_path: images/cloudify-logo.png
    index_path: index.html
    scripts:
      configure: scripts/configure.sh
      start: scripts/start.sh
      stop: scripts/stop.sh
  relationships:
    - type: cloudify.relationships.contained_in
      target: my_instance

The web server deployment depends on a few bash scripts and web resources. We will save this updated file into a new folder, naming the file blueprint.yaml. To the same folder, we will download the rest of the required resources from here

Next we need to bootstrap Cloudify manager into our cloud.
Download Cloudify if you do not already have it and run the following command:

cfy init openstack

A config file named cloudify-config.yaml is generated.
Open the config file and update the parameters to suit your environment (mainly the credentials, image ID, flavor, and public network).
In our case we want Cloudify to be deployed on the network Heat generated, so we update the network & subnet to the ones Heat deployed:

int_network:
  create_if_missing: false
  name: my_app_network
subnet:
  create_if_missing: false
  name: my_app_subnet

Next we will run the bootstrap process:

cfy bootstrap

It will take a couple of minutes for the Cloudify management node to complete its provisioning.

Once it is done, we can verify that the server is ready by running

cfy status

All services should appear as running, which brings us to uploading the blueprint we generated:

cfy blueprints upload -b hello_stack ./myblueprintfolder/blueprint.yaml

Cloudify Blueprint

Once we uploaded the blueprint successfully, we can create an instance of a deployment from this blueprint reference:

cfy deployments create -b hello_stack -d my_stack

In order to start the deployment workflow we issue the following command:

cfy deployments execute -d my_stack install

The deployment process should take a few minutes; in the meanwhile, it is a good idea to open the web UI and view the deployment progress. It is available by typing the server IP (from the bootstrap command output) in the browser.

Cloudify Deployment installation is done

Once all is done, you can see that Cloudify shows the Heat-deployed elements with their corresponding relationships.
In addition, you can see the Cloudify-deployed Python web server contained inside the Heat-deployed server instance.


Bootstrapping Cloudify on Devstack

Cloudify 3 is a major milestone for GigaSpaces. It tightens our integration with OpenStack and steers the product architecture to closely match the OpenStack architecture stack.

There are several ways to get started with Cloudify 3 on OpenStack: you may use an OpenStack public cloud such as HP Cloud; you can use your organization's own OpenStack private cloud; there is an option to download a Vagrant box that lets you bootstrap Cloudify inside a single VM; and finally, there is the option to run DevStack and use it as your own private cloud for bootstrapping Cloudify 3.

Setting up and running DevStack is pretty straightforward, and you can use the official quick guide as your reference: http://devstack.org/ You need to ensure that the computer running DevStack has enough resources for DevStack + 3 additional VMs running inside it (16 GB RAM is highly recommended, but 8 GB should work too). Please add an Ubuntu image (12.04 LTS is recommended) from the official Ubuntu cloud images (http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img) to your new DevStack environment. Once DevStack is installed, I recommend signing into its web UI and spawning a small Ubuntu instance to validate that the installation was indeed successful.

Next, go to the Cloudify web site at http://getcloudify.org/downloads/get_cloudify_3x.html and choose the relevant Debian package (either 32- or 64-bit). Install the package:

dpkg -i ./cloudify-cli_3.0.0-ga-b6_amd64.deb

Next, we need to configure Cloudify to use its OpenStack plugin for bootstrapping. It is recommended to do this in a dedicated folder:

mkdir cloudify_work
cd cloudify_work
cfy init openstack

Follow the steps in the Cloudify quickstart-openstack guide at http://getcloudify.org/guide/3.0/quickstart-openstack.html. DevStack defaults for the cloudify-config.yaml:

##Credentials section:
keystone:
  username: admin
  password: password
  tenant_name: admin
  auth_url: http://[YOUR-DEVSTACK-IP]:5000/v2.0
##Networking section:
networking:
  subnet:
    # Choose an IP of a DNS server accessible from your DevStack machine.
    dns_nameservers: ['8.8.8.8']
  ext_network:
    # Choose your OpenStack public network name. The DevStack default is "public".
    name: public
##Compute section:
compute:
  management_server:
    instance:
      # Flavor and image IDs are environment specific and will have to be overridden.
      # The image ID is the one you got when you added the Ubuntu image to DevStack.
      flavor: 2
      image: ####-####-####-####
Next we follow the quick start guide steps. To verify that the bootstrapping succeeded after you ran the cfy bootstrap command, you may run cfy status to get the IP of the Cloudify management node and a list of the running services on it. Before executing the blueprint upload step in the guide, there are a couple of environment changes to make in blueprint.yaml to match our DevStack environment:
types:
  vm_host:
    server:
      image: [The Ubuntu image id we used for bootstrapping]
      flavor: 2

blueprint:
  - name: floatingip
    type: cloudify.openstack.floatingip
    properties:
      floatingip:
        floating_network_name: public
Once you are done with the steps in the quick guide, you can also view the node-cellar deployment from your DevStack Horizon UI. You will see there the two VMs that were provisioned, in addition to the Cloudify management VM.
Cloudify UI
There we are – node-cellar is deployed and you can use it as a reference for building and running your own blueprints.


Auto-scale and auto-heal your stateful Apache/Tomcat service on OpenStack

Architecting and building a web service in the cloud age is quite simple.
Options range from web site generators such as Wix, to Paas providers such as GAE, all the way to the traditional LAMP setup hosted on IaaS that gives you the maximum control and customization power.
Quoting Spider-Man, "with great power comes great responsibility"… In our case, if you choose LAMP or one of its variants on IaaS, you have the responsibility to ensure a proper service level, which in many cases requires a high-availability configuration to minimize downtime, as well as the ability to scale the service as user traffic increases.
Such a service-level requirement typically translates into putting your front-end web servers behind a load balancer and allowing the application to scale out to multiple web servers.
If your web service is stateful, additional considerations typically include distributing the session-context management and, in some cases, instructing the load balancer to enforce sticky-session load-balancing algorithms.
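To see why sticky routing plus a shared session store matters, here is a toy sketch in plain Python (an illustration only, not the XAP/Apache setup used in this post): requests stick to one server, but because the session lives in a shared store, a surviving server can continue the session after a failure.

```python
import hashlib

session_store = {}                       # stands in for the distributed session store
servers = ["tomcat-1", "tomcat-2"]       # hypothetical web-server pool

def route(session_id, pool):
    """Sticky routing: a session keeps landing on the same live server."""
    idx = int(hashlib.sha1(session_id.encode()).hexdigest(), 16) % len(pool)
    return pool[idx]

def handle_request(session_id, pool):
    server = route(session_id, pool)
    data = session_store.setdefault(session_id, {"hits": 0})
    data["hits"] += 1                    # state survives in the shared store
    return server, data["hits"]
```

If the sticky server drops out of the pool, routing picks another server, and the hit count continues uninterrupted because it was never held in server-local memory.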

Cloudify, a DevOps automation tool that is basically the equivalent of Amazon OpsWorks for OpenStack, lets you handle all this "great responsibility" with significantly less effort, and helps you abstract your architecture from the actual IaaS you choose to work with – keeping the flexibility to change vendors in the future or to create a service that utilizes more than one IaaS vendor.

In this post, I will show you how to easily deploy a web service based on Tomcat web servers, XAP distributed session management, and Apache serving as a load balancer fronting the Tomcat servers.
TomcatWebServiceDeploymentDiagram

The Apache load balancer allows us to keep a VIP the internet knows about and hide an arbitrary number of web servers behind it.

How do I actually use it?

1. Download Cloudify from www.cloudifysource.org
2. Download the HttpSession sample recipe from https://github.com/yoramw/cloudify-recipes/tree/master/apps/HttpSession and its services from https://github.com/yoramw/cloudify-recipes/tree/master/services (both the apps and services folders should be in the same folder hierarchy as they appear in the recipes repo)
3. Start the Cloudify CLI (bin/cloudify.sh, or the .bat file on Windows)
4. Run > bootstrap-cloud
5. Open the web UI from the URL the CLI printed once the bootstrapping was done.
6. Install the recipe app: > install-application -timeout 30 /apps/HttpSession
7. Wait for Cloudify to deploy the services (it should take 5–20 minutes depending on the cloud provider's speed).
8. You can bring up additional Tomcats or shut them down using > set-instances tomcat <# of desired instances>
9. You may run some load on your new web service and see how it behaves using: > invoke apacheLB load 35000 100 (35,000 requests by 100 concurrent requesters)

As you can see, the deployment is very simple. Cloudify configures everything for you and connects the services together.
You can use the same recipe to deploy your testing/staging environment as well as the production environment.
Changing the deployment to a different provider just means bootstrapping a different cloud and installing the application recipe there exactly the same way.

Behind the scenes:

Using XAP as the distributed session store requires Apache Shiro.
The tomcat recipe takes care of connecting Shiro to XAP.
If you want to dive into the details and adjust configurations, I recommend reading the GigaSpaces paper on global HTTP session sharing at http://wiki.gigaspaces.com/wiki/display/SBP/Global+Http+Session+Sharing.

In order to enable Shiro in your own application, use the HttpSession example as a starting point. Place the shiro.ini from the HttpSession example in your app WEB-INF, add the shiro filter to the app web.xml and add the jars to the lib folder as shown in the example recipe.

In the recipe, the apacheLB service brings up an Apache instance that serves as the load balancer. It configures the service to either respect sticky sessions or not by setting the "useStickysession" property in the apacheLB properties file to true or false.

When a Tomcat service instance completes its installation, it registers itself with the ApacheLB, which automatically adds it to its pool of ready web servers.
When a Tomcat service is brought down in an orderly fashion, the first step it takes is to remove itself from the ApacheLB pool of ready servers.

The ApacheLB recipe also lets you generate load to test your setup by utilizing the Apache ab command-line utility.

 

To sum things up, Cloudify takes the hassle, time, and effort out of deploying your highly available and scalable web app, while letting you retain the flexibility of designing and building your application the LAMP way…


On-boarding stateful highly available applications to the cloud

Deploying a highly available stateful web application to the cloud using Cloudify.

Deploying web applications to the cloud is a growing trend.
Cloudify is one of the popular tools that lets you do this with ease, turning the deployment into a seamlessly repeatable procedure that abstracts the actual cloud infrastructure away from it. Cloudify does not stop there: it continues to monitor the deployment and takes action in case of failures or changes in load requirements.

When dealing with stateful web applications, the deployment becomes a bit more challenging: you need to properly configure the load balancer for stickiness, as well as turn the session into a highly available, distributed store that can be accessed from all the web containers.
These additions make sure that in most cases the user's interaction with the web tier remains on a single web container that has the session in memory, and that the same session can continue even if that web container fails and the user is routed to another one.
GigaSpaces' other product, XAP, has done this for customers for many years. Now we bring this pattern as an easily deployable Cloudify recipe.

The Cloudify recipe includes the following services:
1. ApacheLB as the load balancer
2. Tomcat instances as the web tier
3. XAP (for the distributed session store)
a. Manager
b. PU
c. Web-UI

The ApacheLB recipe installs Apache, adds the required modules for load balancing, and provides custom commands for adding and removing back-end nodes.


customCommands ([
    "addNode" : "apacheLB_addNode.groovy",
    "removeNode" : "apacheLB_removeNode.groovy",
    "load" : "apacheLB-load.groovy"
])

The Tomcat recipe installs and configures Tomcat, deploys a web application, and configures Tomcat to utilize XAP for distributed sessions using the Apache Shiro filter. Upon successful startup of Tomcat, the ApacheLB custom command for adding this Tomcat instance to the load balancer is triggered.
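The registration step can be pictured as a postStart hook in the Tomcat recipe. The snippet below is a rough sketch in the Cloudify recipe DSL, not the actual recipe code — the service name, port, and timeout are assumptions:

```groovy
// Sketch: after Tomcat starts, register this instance with the ApacheLB service.
// Names, port, and timeout are illustrative, not taken from the actual recipe.
lifecycle {
    postStart {
        // look up the load balancer service instance
        def apacheLB = context.waitForService("apacheLB", 20, java.util.concurrent.TimeUnit.SECONDS)
        // invoke the load balancer's addNode custom command with this node's URL
        apacheLB.invoke("addNode", "http://${context.privateAddress}:8080" as String)
    }
}
```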

Finally, the XAP installation deploys the XAP Data Grid product to provide a distributed and fault-tolerant session store.

The combination of the load balancer, multiple Tomcat instances, and a redundant XAP data grid ensures that your service will be highly available and maintain user sessions for seamless interaction, even in the case of a partial failure — which statistically will happen at some point in the future…


Cloudify and IBM InfoSphere BigInsights

Following Nati’s blog post about big data in the cloud, this post focuses on Cloudify’s integration with IBM InfoSphere BigInsights, diving into the integration specifics and how to get your feet wet running the Cloudify BigInsights recipe hands-on.

The IBM InfoSphere BigInsights product at its core uses the Hadoop framework, with IBM improvements and additions focused on tailoring it for enterprise customers by adding administrative, workflow, provisioning, and security features, along with best-in-class analytical capabilities from IBM Research.

Cloudify’s value for BigInsights-based applications:

As Nati explained in his post, applications typically consist of a set of services with inter-dependencies and relationships. BigInsights itself is a set of services, and a typical application will utilize some of its services plus additional home-grown or commercial services. Cloudify provides the application owner the following benefits:

  1. Consistent Management
    1. Deployment automation
    2. Automation of post-deployment operations
    3. SLA-based monitoring and auto-scaling
  2. Cloud Enablement and Portability

Let’s dive into the actual integration and see how these line items map to the Cloudify BigInsights recipe:

Deployment automation:

When building a Cloudify recipe we have to decide between using the existing installer vs. manually installing each component on each node and tying it all together. We decided to utilize the provided installer to capitalize on the existing BigInsights tool and be as closely aligned with how IBM intended the tool to be used. The sequence of events to get to a working BigInsights service is as follows:

  1. Analyze the service and application recipe to decide on the initial cluster topology.
  2. Provision new servers or allocate existing servers (from a cloud or existing hardware in the enterprise) to satisfy the topology requirements.
  3. Prepare the cluster nodes for the BigInsights installer (fulfilling the install prerequisites and requirements, such as consistent hostname naming, passwordless SSH or passwords, software packages…)
  4. Build a silent install XML file based on the actual cluster nodes and the topology.
  5. Run the installer and verify everything is working when it is done.

This takes care of bringing up the BigInsights cluster and letting us hook it up to the rest of the services.

Automation of post-deployment operations:

Post deployment operations in Cloudify are handled by Cloudify’s built-in service management capabilities, such as enabling dynamic adjustment of the number of instances each service will have.  In addition to the generic built-in capabilities, which in the BigInsights case can be used, for example, to change the number of data nodes in the cluster, Cloudify recipes define “Custom Commands” that handle specific post-deployment operations.

In the BigInsights recipe we have custom commands that handle Hadoop operations such as adding and removing Hadoop services (Flume, HBase regions, Zookeeper…) to/from existing nodes, re-balancing the cluster, running DfsAdmin commands as well as DFS commands, all from the Cloudify console.

SLA-based monitoring and auto-scaling:

In addition to the option I mentioned earlier to manually set the number of nodes in the cluster during run-time, Cloudify monitors the application’s services and lets us define, in the recipe, SLA-driven policies that can dynamically change the cluster size and the balance between the different services based on the monitoring metrics.

The BigInsights recipe monitors the Hadoop service using JMX MBeans that Hadoop exposes. The metrics we monitor can easily be changed by editing the list below from the master-service.groovy recipe:


monitors {
    def nameNodeJmxBeans = [
        "Total Files": ["Hadoop:name=FSNamesystemMetrics,service=NameNode", "FilesTotal"],
        "Total Blocks": ["Hadoop:name=FSNamesystemMetrics,service=NameNode", "BlocksTotal"],
        "Capacity Used (GB)": ["Hadoop:name=FSNamesystemMetrics,service=NameNode", "CapacityUsedGB"],
        "Blocks with corrupt replicas": ["Hadoop:name=FSNamesystemMetrics,service=NameNode", "CorruptBlocks"],
        "Storage capacity utilization": ["Hadoop:name=NameNodeInfo,service=NameNode", "PercentUsed"],
        "Number of active metrics sources": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "num_sources"],
        "Number of active metrics sinks": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "num_sinks"],
        "Number of ops for snapshot stats": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "snapshot_num_ops"],
        "Average time for snapshot stats": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "snapshot_avg_time"],
        "Number of ops for publishing stats": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "publish_num_ops"],
        "Average time for publishing stats": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "publish_avg_time"],
        "Dropped updates by all sinks": ["Hadoop:name=MetricsSystem,service=NameNode,sub=Stats", "dropped_pub_all"],
    ]
    return JmxMonitors.getJmxMetrics("127.0.0.1", nameNodeJmxPort, nameNodeJmxBeans)
}

These metrics are then tied to visual widgets that are shown in the Cloudify Web-UI interface, and they can be referenced in the SLA definition.
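As an illustration of how a metric becomes a widget, a Cloudify recipe typically declares a userInterface block along the following lines. This is a sketch only — the group and widget names here are made up, and only the metric names match the list above:

```groovy
// Sketch: exposing two of the NameNode metrics as Web-UI widgets.
// Group/widget names are illustrative; metric names match the monitors block.
userInterface {
    metricGroups = ([
        metricGroup {
            name "HDFS"
            metrics(["Total Files", "Capacity Used (GB)"])
        }
    ])
    widgetGroups = ([
        widgetGroup {
            name "Total Files"
            widgets ([
                balanceGauge { metric = "Total Files" },
                barLineChart { metric = "Total Files" }
            ])
        }
    ])
}
```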

For this version of the recipe, we decided to skip automatic scaling rules and let the user control scaling via custom commands. In Hadoop, automatic scaling (and specifically the cluster re-balancing it triggers) has to take into account the future workloads planned for the cluster, since re-balancing can be a lengthy process that actually decreases performance until it is done.

Cloud Enablement and Portability:

Cloudify handles cloud enablement and portability using Cloud Drivers, which abstract the cloud- or bare-metal-specific provisioning and management details from the recipe. There are built-in drivers for popular clouds such as OpenStack, EC2, and Rackspace, as well as a BYON driver to handle your bare-metal servers.

The cloud driver lets you define hardware templates that will be available to your recipe, as well as your cloud credentials.

For the BigInsights recipe, we define two templates that we will later reference from the recipe. Here is the template definition for the OpenStack cloud driver:


MASTER : template{
    imageId "414"
    machineMemoryMB 1600
    hardwareId "103"
    remoteDirectory "/root/gs-files"
    localDirectory "tools/cli/plugins/esc/openstack/upload"
    keyFile "key.pem"
    options ([
        "openstack.securityGroup" : "myGroup",
        "openstack.keyPair" : "key"
    ])
},
DATA : template{
    imageId "414"
    machineMemoryMB 1600
    hardwareId "102"
    remoteDirectory "/root/gs-files"
    localDirectory "tools/cli/plugins/esc/openstack/upload"
    keyFile "key.pem"
    options ([
        "openstack.securityGroup" : "myGroup",
        "openstack.keyPair" : "key"
    ])
}

Finally, let’s dive into a hands-on on-boarding of BigInsights in the cloud:

The recipe is located at BigInsights App folder & BigInsights Service folder.

Download the recipe and do the following :

  1. The recipe expects two server templates: MASTER & DATA. You will need to edit the cloud driver you will use (under Cloudify home/tools/cli/plugins/esc/…) and add the two templates (shown above) next to the existing SMALL_LINUX template.

Deployment automation:

  1. Copy the BigInsights recipe to the recipes folder. Verify you have a BigInsights folder under both the services and the apps folders under the Cloudify home/recipes root folder.
  2. Open the Cloudify console and bootstrap your favorite cloud (which has the two templates defined in #1).
  3. Install the default BigInsights application by running the following line (assuming the current directory is Cloudify home/bin): "install-application -timeout 45 ../recipes/apps/hadoop-biginsights"

Automation of post-deployment operations:

  1. To add additional data nodes manually, just increase the number of dataOnDemand service instances by running the following command:
    set-instances dataOnDemand X (where X is a number higher than the current number of instances and bounded by the max instance count defined in the recipe – the default is a max of 3)
  2. To rebalance the HDFS cluster after adding data nodes, you can run the following command:
    invoke master rebalance
  3. To add an HBase region to one of the existing data nodes run the following custom command:
    invoke master addNode x.x.x.x hbase (where x.x.x.x is the IP of the data node instance)
  4. You can also trigger dfs and dfsAdmin commands from the Cloudify console, for example:
    invoke master dfs -ls

SLA-based monitoring and auto-scaling:

  1. Open the Cloudify Web-UI and select the BigInsights application. You will see the deployment progress, and you can start the IBM BigInsights management UI directly from the services section of the master service.
  2. From the same Cloudify Web-UI, make sure the master service in the BigInsights application is selected, then click on the Metrics tab in the middle of the page. You will see the Hadoop metrics shown in the GUI widgets as defined in the master-service.groovy recipe.
    https://gist.github.com/3507945

Here is a short video that captures the bootstrapping and deployment of BigInsights using Cloudify:
