Wednesday, March 05, 2014

Stochastic Gradient Methods 2014

Last week I attended the Stochastic Gradient Methods workshop held at UCLA's IPAM. Surprisingly, there's still quite a bit of activity and unsolved questions around what is, essentially, minimizing a quadratic function.

In 2009 Strohmer and Vershynin rediscovered an algorithm for solving linear systems of equations from 1937 -- the Kaczmarz method -- and showed that this algorithm is a form of Stochastic Gradient. This view of SGD motivates a biased sampling strategy which gives a faster convergence rate than regular Stochastic Gradient. This spurred a flurry of activity, motivating results in at least 5 different lectures.
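To make the connection concrete, here is a minimal sketch (my own illustration, not code from the talks) of randomized Kaczmarz for a consistent linear system $Ax = b$, with rows sampled in proportion to their squared norms as in the Strohmer-Vershynin analysis:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent system Ax = b by randomized Kaczmarz.

    Rows are sampled with probability proportional to ||a_i||^2, which is
    the biased-sampling view of SGD on the objective 0.5 * ||Ax - b||^2.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane {x : a_i . x = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Quick check on a random consistent system
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # should be tiny
```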

In 2010, Nesterov showed that Randomized Coordinate Descent has a faster convergence rate than SGD, and in 2013 Singer showed a way to accelerate it to quadratic convergence. In 2013 Richtarik gave an alternative algorithm achieving the same convergence rate, and also came up with better step sizes that rely on the sparsity pattern of the problem.

Summaries of talks I attended with links to slides are below:

Ben Recht

Gave an overview of the Hogwild and Jellyfish methods. Hogwild has been covered a few times before at NIPS, but here's an overview slide



Jellyfish (described in their Large Scale Matrix completion paper) chooses sampling order in a way to minimize lock contention.

Also talked about their work on explaining the gap in performance between SGD sampling with replacement and without replacement. Empirically, without replacement works better (see Section 5 of the "Beneath the valley" paper), yet until recently the tools were missing to explain it. They are able to prove faster rates for no-replacement sampling in the Kaczmarz algorithm by relying on a noncommutative arithmetic-geometric mean inequality.

Resources:


  • Slides Recht - we should all run hogwild!.pdf
  • Beneath the valley of the noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences. http://arxiv.org/abs/1202.4184
  • Parallel Stochastic Gradient Algorithms for Large-Scale Matrix Completion. Recht and Re. 2011.
  • HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. Niu, Recht, Re, and Wright. 2011.

Yoram Singer

Talked about accelerating coordinate descent with a momentum-like approach, dubbed Generalized Accelerated Gradient Descent. Nesterov's accelerated gradient method has quadratic convergence with linear dependence on the condition number of the loss
$$O\left(\frac{L}{k^2}\right)$$

Parallel coordinate descent depends on the average of the per-coordinate Lipschitz constants, which can be much better for a badly conditioned loss:
$$O\left(\frac{\bar{L}_i}{k}\right)$$

The proposed method has the quadratic convergence of accelerated gradient, while retaining the dependence on the average curvature rather than the worst:
$$O\left(\frac{\bar{L}_i}{k^2}\right)$$

Resources:



Dimitri Bertsekas

In-depth tutorial "Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Unified Framework"

One slide that stuck out is the one-dimensional illustration of why SGD works.


In the far-out region, all gradients point in the same direction, so taking a gradient step with respect to a single component function works just as well as looking at the full sum.

This also serves as the motivation for the "heavy ball" method (Polyak, 1964). When you are in the far-out region, you want to accelerate, while in the confusion region you want to decelerate; you can accomplish this by modifying the gradient update formula as follows

$$x_{k+1} = x_k-\alpha_k \nabla f_{i_k}(x_k)+\beta_k(x_k-x_{k-1})$$

This is similar in spirit to the "Accelerated Stochastic Approximation" of Kesten (1958), which grows the step size if the differences between successive $x$'s keep the same sign, and shrinks it if there are many sign changes.

Schmidt said Stochastic Averaged Gradient works better than Kesten's approach in a multi-dimensional setting.
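As a concrete illustration of the heavy-ball update above, here is a minimal sketch (mine, not from the tutorial) applied to a least-squares sum of component functions; the step size and momentum constants are illustrative only:

```python
import numpy as np

def heavy_ball_sgd(A, b, alpha=0.01, beta=0.9, iters=2000, seed=0):
    """SGD with a Polyak momentum term:
    x_{k+1} = x_k - alpha * grad f_{i_k}(x_k) + beta * (x_k - x_{k-1}),
    for component functions f_i(x) = 0.5 * (a_i . x - b_i)^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)
        grad_i = (A[i] @ x - b[i]) * A[i]        # stochastic gradient
        x_next = x - alpha * grad_i + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x
```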

Resources:



Peter Richtarik


Gave an overview of his "Accelerated, Parallel and Proximal Coordinate Descent". It gives a technical improvement over his previous work "Distributed coordinate descent method for learning with big data" (http://arxiv.org/abs/1310.2059), which seems to have the meat of the contributions.

Here's a slide from his talk comparing various methods.


"Prox" column means the algorithm can take proximal steps, i.e., can be used with constraints and not-nice regularizers. "Accel" or Accelerated is whether the method is enjoys $O(1/k^2)$ convergence rate where $k$ is the iteration counter. "General f" means it applies for convex problems rather than quadratic. "Block" is whether method can update some of the coordinates at a time rather than all coordinates.

The setting of the problem is summarized in the slide below


You are optimizing a sum of losses $f_e$, and not all losses depend on all coordinates. You want to update your sets of coordinates in blocks, in parallel. The sets of variables involved in each $f_e$ determine how well you can parallelize the problem. In half a dozen papers on his website he develops a framework dubbed Expected Separable Overapproximation (ESO) to analyze such problems.

One outcome of the ESO approach is a formula that incorporates the sparsity of the problem into the calculation of the step size. See Table 3 of his APPROX paper (http://arxiv.org/pdf/1312.5799v1.pdf)

$$v_i = \sum_{j=1}^m \left(1+\frac{(\omega_j-1)(\tau-1)}{\max(1,n-1)}\right)A_{ji}^2$$

This is the formula for the step size for coordinate $i$ in randomized coordinate descent, computed as a sum over examples $j$. The quantity $\omega_j$ is the number of components of the vector that example $x_j$ depends on, $n$ is the dimensionality, and $\tau$ is the number of coordinates updated in parallel. $A$ is the matrix of the quadratic minimization problem, replaced with the matrix of per-coordinate Lipschitz constants for general convex problems.
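Here is a small sketch (my own, assuming a dense numpy matrix $A$ and illustrative variable names) of how this step-size formula could be evaluated:

```python
import numpy as np

def eso_stepsizes(A, tau):
    """Per-coordinate curvature parameters v_i from the ESO-style formula
    v_i = sum_j (1 + (omega_j - 1)(tau - 1) / max(1, n - 1)) * A_ji^2,
    where omega_j is the number of nonzeros in row j of A and tau is the
    number of coordinates updated in parallel."""
    m, n = A.shape
    omega = (A != 0).sum(axis=1)                       # nonzeros per row
    scale = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1, n - 1)
    return (scale[:, None] * A**2).sum(axis=0)         # length-n vector

# Example: step sizes for coordinate descent with tau = 4 parallel updates
A = np.random.default_rng(0).standard_normal((50, 10))
A[np.abs(A) < 1.0] = 0.0        # sparsify so that omega_j actually varies
v = eso_stepsizes(A, tau=4)
```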

Resources:



Rachel Ward and Deanna Needell


Gave background on their paper "Stochastic Gradient Descent and the Randomized Kaczmarz Algorithm". The setting:



Further in the presentation, they developed importance sampling for SGD. Traditionally, SGD picks a random component of the sum above, and the number of steps required to reach a given accuracy is proportional to the worst condition number (Lipschitz constant) over the per-example losses.

They derived the following formula for the number of steps needed to reach a given accuracy $\epsilon$ with uniform sampling

$$k \propto \log \epsilon^{-1} \left(\sup_i \frac{L_i}{\mu} + \epsilon^{-1} \frac{\sigma^2}{\mu^2}\right)$$

For quadratics, the first term is close to the largest condition number out of all component functions $f_i$, except you are normalizing by the global smallest eigenvalue $\mu$, rather than the per-component smallest eigenvalue $\mu_i$. The second term is "normalized consistency" -- the expected squared norm of the gradient divided by the smallest eigenvalue squared.

Instead of uniform sampling, we can sample examples in linear proportion to the Lipschitz constant of the gradient of the loss on that example. This cuts the number of steps down so that it depends on the average Lipschitz constant, normalized by the strong convexity parameter $\mu$, rather than the largest Lipschitz constant. Since the Lipschitz constant is an upper bound on the largest eigenvalue of the Hessian, this means the number of steps grows in proportion to the average condition number rather than the largest condition number.

The term involving the Lipschitz constant thus drops from a max to an average. In other words we get this:
$$\sup_i \frac{L_i}{\mu} \to \frac{\bar{L}}{\mu}$$

The second (consistency) term can instead potentially get larger; we get
$$\frac{\sigma^2}{\mu^2}\to \frac{\bar{L}\sigma^2}{\inf_i L_i \mu^2}$$

The best trade-off depends on the details of the function: badly conditioned but accurate gradients -- sample proportionally to the Lipschitz constants; well conditioned but noisy gradients -- stay closer to uniform. She shows that if we sample halfway between uniform and Lipschitz, so-called "partially biased sampling", both terms are guaranteed to be no larger than for uniform sampling.
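To make the scheme concrete, here is a hedged sketch (my own, for a least-squares objective; the mixture weight and step-size schedule are illustrative assumptions, not from the talk) of SGD with partially biased sampling:

```python
import numpy as np

def partially_biased_sgd(A, b, lam=0.5, iters=5000, seed=0):
    """SGD on 0.5/m * ||Ax - b||^2 with importance sampling.

    Sampling weights mix uniform and Lipschitz-proportional sampling:
    p_i = lam / m + (1 - lam) * L_i / sum(L), with L_i = ||a_i||^2.
    Gradients are reweighted by 1 / (m * p_i) to stay unbiased."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = np.einsum('ij,ij->i', A, A)
    probs = lam / m + (1 - lam) * L / L.sum()
    x = np.zeros(n)
    for k in range(1, iters + 1):
        i = rng.choice(m, p=probs)
        grad_i = (A[i] @ x - b[i]) * A[i] / (m * probs[i])
        x -= grad_i / (L.mean() * np.sqrt(k))   # illustrative step size only
    return x
```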

Yurii Nesterov obtained similar bounds for sampling strategy and convergence in his "Efficiency of coordinate descent methods on huge-scale optimization problems". The key difference is that he samples which coordinate to update at each step, instead of sampling examples. The optimal sampling strategy comes down to picking coordinates in linear proportion to their Lipschitz constants, and the convergence rate again drops to the average of the per-coordinate Lipschitz constants rather than the worst Lipschitz constant. Roughly speaking, the number of steps until convergence goes down to the average eigenvalue of the Hessian rather than the worst eigenvalue.

Deanna Needell gave background on the Kaczmarz algorithm, which gives an alternative way to motivate the importance sampling results. In particular, the first few slides illustrate why the order matters. She also gives an analytic expression to find the best next point to sample in the quadratic case. This requires an $O(\text{# of examples})$ search at each iteration. She then shows an approximation approach based on dimensionality reduction that takes $O(1)$ time per step.

Ben Recht made a similar point on the impact of choosing a better ordering in his presentation



Resources:



Stephen Wright

Started with a nomenclature discussion on how "Stochastic Gradient Descent" methods don't qualify as gradient descent, because SGD steps can be in ascent directions for the global cost function. Instead, they should be referred to as "Stochastic Gradient" methods. Every speaker afterwards corrected themselves on the usage.

Gave an overview of the parallel Kaczmarz method and then extended the analysis to get a convergence rate for parallel Kaczmarz with "inconsistent reads" allowed -- the situation where the parameter vector gets modified while it is being read.

Resources:




Yann LeCun

Gave background on convolutional neural networks and showed a demo of online learning using ImageNet. Basically it was a network pre-trained on ImageNet, with nearest-neighbor classification in the embedding induced by the activations of the last layer.



Impressively, it seems to do a good job learning to recognize from just a single example.

Also talked about connections between neural network learning and random matrix theory. You can see the connection if you rewrite the output of a ReLU neural network as follows

$$\sum_P C(x) \prod_{(i,j) \in P} W_{i,j}$$

The sum is over all paths through active nodes from the input layer to the output node. The coefficients $C(x)$ depend on the input data. This is a polynomial with degree equal to the number of layers, and results from random matrix theory say that if the coefficients $C(x)$ are Gaussian distributed, then local minima are close together in energy, so essentially finding a local minimum is as good as finding the global minimum.
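To see that the path-sum rewriting holds, here is a small numerical check (my own construction) for a one-hidden-layer ReLU network, where each path's coefficient is the input coordinate times the indicator that the hidden unit on the path is active:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 3                        # input size, number of hidden units
W1 = rng.standard_normal((h, d))   # input -> hidden weights
w2 = rng.standard_normal(h)        # hidden -> output weights
x = rng.standard_normal(d)

# Usual forward pass
pre = W1 @ x
out = w2 @ np.maximum(pre, 0.0)

# Same output as a sum over paths i -> j -> output, where each term is
# C(x) * W1[j, i] * w2[j] with C(x) = x[i] * 1[hidden unit j is active]
active = (pre > 0).astype(float)
out_paths = sum(x[i] * active[j] * W1[j, i] * w2[j]
                for j in range(h) for i in range(d))

assert np.isclose(out, out_paths)
```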

Resources:



Francis Bach

Presented results on convergence rates of SGD and how they are affected by lack of strong convexity.

Resources:




Jorge Nocedal

Talked about adapting quasi-Newton methods to the stochastic setting. Convergence of SGD depends on the square of the condition number, whereas Newton's method is independent of the condition number, at the cost of a step that costs $O(\text{dimensions}^2)$.

The compromise he proposes is a BFGS-like method where:
1. You use exact Hessian information to compute the product of the Hessian with the step direction
2. You only do it once every 20 iterations

This makes the cost of the L-BFGS-like update similar to an SGD update.
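Here is a rough sketch of this kind of scheme (my own simplification, not the exact algorithm from the talk): plain stochastic gradient steps scaled by an L-BFGS approximation, where a curvature pair is formed only every 20 iterations using an exact Hessian-vector product (shown for a least-squares objective, where that product is just two matrix-vector multiplies):

```python
import numpy as np
from collections import deque

def sqn_least_squares(A, b, alpha=0.05, update_every=20, mem=10,
                      iters=2000, seed=0):
    """Stochastic quasi-Newton sketch for 0.5/m * ||Ax - b||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    x_anchor = x.copy()
    pairs = deque(maxlen=mem)          # stored (s, y) curvature pairs

    def lbfgs_direction(g):
        # Standard L-BFGS two-loop recursion applied to the gradient g
        q, coeffs = g.copy(), []
        for s, y in reversed(pairs):
            a = (s @ q) / (y @ s)
            coeffs.append(a)
            q -= a * y
        s, y = pairs[-1]
        q *= (y @ s) / (y @ y)
        for (s, y), a in zip(pairs, reversed(coeffs)):
            q += (a - (y @ q) / (y @ s)) * s
        return q

    for k in range(1, iters + 1):
        i = rng.integers(m)
        g = (A[i] @ x - b[i]) * A[i]                    # stochastic gradient
        x -= alpha * (lbfgs_direction(g) if pairs else g)
        if k % update_every == 0:                       # rare curvature update
            s = x - x_anchor
            y = A.T @ (A @ s) / m                       # exact Hessian-vector product
            if y @ s > 1e-10:
                pairs.append((s, y))
            x_anchor = x.copy()
    return x
```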

Resources:



Asuman Ozdaglar

Introduced a way to extend ADMM to graph-structured problems without having to choose the order of updates. The setting of the problem is summarized below


As you may recall, ADMM works by decoupling components of the loss: each component operates on its own copy of the parameters. You alternate between each component minimizing itself locally over its own copy, and updating the shared parameter values from the copies that have just been minimized.

These steps can be implemented as message passing on a factor graph -- factors here are the components of the cost function, whereas nodes are the variables that the cost function depends on. Each component function depends on a subset of variables, and each variable is involved in a subset of component functions.

This implementation of ADMM is similar to Divide and Concur, of which a readable overview is given in Yedidia's message-passing paper.



One inconvenience of this approach is that it requires establishing an arbitrary order of message updates.

Ozdaglar's idea is to restore symmetry by adding extra variables, one for each direction of the constraint, and an extra constraint that forces them to agree. The update is done in parallel, like parallel BP, followed by an extra step that synchronizes the extra constraint variables.
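For reference, here is a sketch of the plain global-consensus ADMM structure that the above builds on (this is the textbook scheme with my own toy objective, not Ozdaglar's parallel variant): each component keeps its own copy of the parameters, a consensus step reconciles the copies, and dual variables enforce agreement over time.

```python
import numpy as np

def consensus_admm(C, rho=1.0, iters=100):
    """Global-consensus ADMM for: minimize sum_i 0.5 * ||x_i - c_i||^2
    subject to x_i = z for all i (so z converges to the mean of the c_i)."""
    m, n = C.shape
    X = np.zeros((m, n))     # local copies, one per component function
    U = np.zeros((m, n))     # scaled dual variables
    z = np.zeros(n)          # shared consensus variable
    for _ in range(iters):
        X = (C + rho * (z - U)) / (1.0 + rho)   # local minimizations (parallelizable)
        z = (X + U).mean(axis=0)                # consensus step
        U += X - z                              # dual update
    return z

C = np.random.default_rng(0).standard_normal((5, 3))
print(consensus_admm(C), C.mean(axis=0))   # the two should nearly match
```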

Resources:



John Duchi

John Duchi gave a whiteboard talk on the convergence of zeroth-order optimization. Happily, the convergence is only a factor of $\sqrt{\text{dimensions}}$ worse than standard SGD.

Started with a succinct derivation of the non-asymptotic error after $k$ steps of a proximal averaging algorithm, which looks a lot like averaged SGD, in terms of the errors of the gradients. The actual formula has no O-terms and the proof is found in the notes, but roughly it looks like this

$$E(\text{error}) \le O\left(\frac{1}{\sqrt{k}}\right)+\frac{1}{k}\sum_{i=1}^{k} E[\|\epsilon_i\|]$$

Error here is in terms of the value of the function, which is what we care about in applications (as opposed to the distance from the true parameter vector). As $k$ increases, the second term vanishes and you get the regular $1/\sqrt{k}$ convergence. If you don't care about constraints, the "prox" step can be replaced by an SGD step.
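As an illustration of the zeroth-order setting, here is a hedged sketch (mine; the two-point estimator, step-size schedule, and averaging are illustrative choices, not the exact algorithm from the talk) of gradient-free stochastic optimization using only function evaluations:

```python
import numpy as np

def zeroth_order_sgd(f, x0, steps=2000, mu=1e-4, seed=0):
    """Zeroth-order SGD sketch: a two-point finite-difference estimate of the
    gradient along a random direction, followed by an averaged SGD step."""
    rng = np.random.default_rng(seed)
    d = x0.size
    x, x_avg = x0.astype(float).copy(), np.zeros_like(x0, dtype=float)
    for k in range(1, steps + 1):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                               # random direction
        g = d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= g / np.sqrt(k)                                  # illustrative step size
        x_avg += (x - x_avg) / k                             # running iterate average
    return x_avg

# Example: minimize a simple quadratic using only function evaluations
x_star = zeroth_order_sgd(lambda x: np.sum((x - 1.0) ** 2), np.zeros(10))
```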

Resources:


Mark Schmidt

Gave an overview of their Stochastic Averaged Gradient algorithm. Full details and many extensions are in their hefty 45-page arXiv paper.

Their motivation is to combine the fast initial convergence of stochastic methods and the fast late-stage convergence of full-gradient methods, while keeping the cheap iteration cost of stochastic gradient.



Stochastic Averaged Gradient reaches this goal with a simple modification of stochastic gradient. The idea is that at each gradient step, in addition to the gradient computed for the current data point, you also add in the gradients previously computed for the other data points. Those gradients may be out of date, but for a strongly convex loss with convex component functions, this staleness doesn't hurt.
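A minimal sketch of the idea (my own, for a least-squares objective; the step size is an illustrative choice):

```python
import numpy as np

def sag_least_squares(A, b, alpha=None, iters=5000, seed=0):
    """Stochastic Averaged Gradient sketch for 0.5/m * ||Ax - b||^2.

    A table of the most recent gradient for every data point is kept; each
    step refreshes one entry and moves along the average of the table."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if alpha is None:
        alpha = 1.0 / np.max(np.einsum('ij,ij->i', A, A))   # ~1 / L_max
    grads = np.zeros((m, n))    # stored (possibly stale) per-example gradients
    grad_sum = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)
        g_new = (A[i] @ x - b[i]) * A[i]
        grad_sum += g_new - grads[i]     # keep the running sum up to date
        grads[i] = g_new
        x -= alpha * grad_sum / m        # step along the averaged gradient
    return x
```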

Schmidt et al. advocate sampling data points with high curvature more often, based on the argument that such gradients might be changing faster and need to be evaluated more often. However, a formal justification of this intuition is not available, and instead they fall back on the same analysis as the Kaczmarz importance sampling described earlier.

One difference of weighted sampling from the standard SGD setting is that examples can be sampled more often without needing to correct for this bias, because the weight of each gradient in SAG is $1/n$ regardless of how many times the function is sampled. However, bias correction will come up as an issue in any large-scale adaptation of SAG where you can't store all gradients in memory.

Resources:


Lin Xiao

Gave an overview of stochastic variance-reduced gradient methods. The idea of variance reduction is to periodically evaluate the full gradient and then use it to adjust future gradient steps. If we evaluated the full gradient at a previous point $\tilde{x}$, the gradient update formula becomes

$$x_{k+1}=x_k - \nu (\nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x})+\nabla F(\tilde{x}))$$

Here $\nabla F(\tilde{x})$ is the full gradient evaluated at some previous point $\tilde{x}$, and $\nabla f_{i_k}(x_k)$ is the gradient of the loss on the current example $i_k$, evaluated at the current iterate $x_k$.

The idea of variance reduction is illustrated below.



On the left you see what would happen if you applied the variance reduction formula with $\nabla F(\tilde{x})$ recomputed at each step -- that reduces to regular gradient descent. If instead we evaluate the full gradient only once every $k$ iterations, the correction will be based on a stale value of the gradient and will not be quite right; however, the mean error is zero, so it gives an unbiased estimate of the correction term.
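A minimal sketch of this scheme (my own, for a least-squares objective; the epoch length and step size are illustrative):

```python
import numpy as np

def svrg_least_squares(A, b, eta=None, epochs=20, seed=0):
    """SVRG sketch for 0.5/m * ||Ax - b||^2: recompute the full gradient at a
    snapshot point once per epoch and use it to reduce the variance of each
    subsequent stochastic step."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if eta is None:
        eta = 0.1 / np.max(np.einsum('ij,ij->i', A, A))
    x_snapshot = np.zeros(n)
    for _ in range(epochs):
        full_grad = A.T @ (A @ x_snapshot - b) / m       # gradient at snapshot
        x = x_snapshot.copy()
        for _ in range(m):
            i = rng.integers(m)
            g_x = (A[i] @ x - b[i]) * A[i]               # gradient at current iterate
            g_snap = (A[i] @ x_snapshot - b[i]) * A[i]   # same example at snapshot
            x -= eta * (g_x - g_snap + full_grad)        # variance-reduced step
        x_snapshot = x
    return x_snapshot
```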

Then they introduce a weighted sampling strategy, where data points are sampled proportionally to the condition numbers of the individual loss functions. When the number of iterations is much larger than the number of examples, the weighted sampling strategy drops convergence to $O(C_{\text{avg}})$ steps as opposed to $O(C_{\max})$ steps for uniform sampling, where $C_{\text{avg}}$ is the average condition number over all component loss functions.

Resources:



James Spall

Gave results on Stochastic Approximation (SA) methods. Approximation can be seen as minimizing the distance between the solution and the ideal solution, so SA methods come down to some form of stochastic optimization. The difference is that the setting is more general: non-convexity, gradients that can't be computed, possibly discrete problems.

The standard approach to derivative-free methods is Finite Difference Stochastic Approximation (FDSA), where computing the gradient numerically takes $2p$ function evaluations, with $p$ the dimensionality.

The idea of the Simultaneous Perturbation Stochastic Approximation (SPSA) method is to estimate the gradient along a randomly chosen direction and take a step in that direction with step length proportional to the directional gradient. This requires two function evaluations instead of $2p$ for FDSA, and works just as well.
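A minimal SPSA sketch (my own; the gain sequences use the decay exponents Spall typically recommends, but all constants here are illustrative):

```python
import numpy as np

def spsa(f, x0, a=0.1, c=0.1, iters=1000, seed=0):
    """Simultaneous Perturbation Stochastic Approximation sketch.

    Each iteration perturbs all coordinates at once along a random Rademacher
    direction, so the gradient estimate costs only two function evaluations
    regardless of the dimension."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                       # step-size gain sequence
        ck = c / k ** 0.101                       # perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=x.size)
        g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x -= ak * g_hat
    return x

# Example: minimize a quadratic without gradients
x_min = spsa(lambda x: np.sum((x - 3.0) ** 2), np.zeros(5))
```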

Two summary slides from the SPSA talk:



Here is a graph of a numerical simulation of SPSA vs the standard approach


He gave a more in-depth overview of the methods in a 2012 NIPS talk. It's available as a YouTube video, but here are screenshots of some intro slides.

Simple SPSA is essentially a first-order method, and has the same problems as other first-order methods:

  • sensitivity to the scaling of the units of $\theta$
  • slow convergence in the final phase

To address these, he introduces Adaptive Stochastic Approximation by the Simultaneous Perturbation Method (ASP), which goes further by numerically estimating the Hessian in addition to the gradient.

The approach to approximating the Hessian is similar in spirit to SPSA -- compute the gradient along two random directions and estimate the Hessian numerically from that (formula 2.2 in "Adaptive Stochastic Approximation by the Simultaneous Perturbation Method"). This requires 4 function evaluations.

This estimate is noisy, so momentum is used to smooth the Hessian estimates.

More recent work (Spall 2009) gives an improved formula for estimating the Hessian numerically using what he calls "feedback term".

Adaptive SPSA methods store the Hessian approximation explicitly, like BFGS, so they aren't directly applicable to deep neural nets.

Resources:



102 comments:

Igor said...

Awesome, thanks Yaroslav !

Igor.

Anonymous said...

Dear Yaroslav,

This is a very nice summary of the talks; great job.

Let me offer a few minor points regarding my talk:

i) The 'Hydra' paper (Distributed Coordinate Descent) is very different from the 'APPROX' paper. In fact, there is essentially no technical intersection between the two. They are related, but in a complementary way.

The Hydra method focuses on the computation of ESO for a distributed sampling, and on proving that partitioning of coordinates among nodes at most doubles the number of iterations. The analysis applies to the strongly convex case.

The approx method focuses on designing and analyzing accelerated coordinate descent methods which 'work'. Also, the paper comes up with new stepsize for *any* coordinate descent method based on the concept of ESO (including Hydra). That development is orthogonal to the APPROX method itself.

ii) The 'setting' slide is from a different talk (a new analysis of Hogwild!) I gave a year ago - but the paper has not yet been put online.

Peter

denis said...

Yaroslav,
a useful collection, thank you !


Would you know of standard test functions on the web that several of these people have used ?
There are SO many methods and variants, with not much in the way of a table

"method, simplicity e.g. lines of code, link to online test runs".
