I generally have two complementary sets of clothing. Usually I wear: hat or cycling cap, wind/rain jacket, cycling gloves, cycling jersey, cycling shorts, socks and shoes. The complementary set: fleece cap (a beanie), fleece top, underwear (usually bathing trunks), trousers and a second pair of socks. It stays in the bag and comes into use on extremely cold days or nights, when off the bike, or when the first set is in the wash. Other than that there are some additional items for the rain: rain pants, overshoes and gloves. I carry these in the hood pouch of the rain jacket, after having cut off the hood. Often I take a second T-shirt as well. Sorry to disappoint you: everybody makes compromises sometimes.
I’ve been very lucky that Dom [Mason] from Mason Cycles has given me a bike to use. It’s a four-season bike that’s designed for long-distance riding, so it’s actually perfect.
In short: cholesterol is healthy, saturated fat is healthy, salt is healthy, and sugar is unhealthy. I have pulled those four points out of a press release by the Academy of Nutrition and Dietetics, which I reproduce in full below.
Hey, my boss said to talk to you – I hear you know a lot about web apps?
-Yeah, I’m more of a distributed systems guy now. I’m just back from ContainerCamp and Gluecon and I’m going to Dockercon next week. Really excited about the way the industry is moving – making everything simpler and more reliable. It’s the future!
Cool. I’m just building a simple web app at the moment – a normal CRUD app using Rails, going to deploy to Heroku. Is that still the way to go?
-Oh no. That’s old school. Heroku is dead – no-one uses it anymore. You need to use Docker now. It’s the future.
Oh, OK. What’s that?
-Docker is this new way of doing containerization. It’s like LXC, but it’s also a packaging format, a distribution platform, and tools to make distributed systems really easy.
Containeri... what now? What's LXC?
I'm new to packaging and publishing on PyPI. I've been searching for information about best practices for modules and namespaces, and about using __init__.py to manipulate them. Yet I haven't been able to find a generally accepted approach.
Considering a package with multiple modules (and possibly sub-packages), there seem to be three different approaches:

1. Leave __init__.py blank. This enforces explicit imports and thus clear namespaces. I've read Alex Martelli's posts in favor of this option on various Stack Overflow questions. The con is that the user of the package has to import separate modules and call them with dot notation.

2. Import all modules in __init__.py. The user doesn't have to do multiple imports. The cons are explicit vs. implicit, and also, as Martelli puts it (paraphrased), "if that's how your package works, maybe it should all go in a single module anyway".

3. Import key functions from various modules directly into the package namespace. If you restructure modules, you still have the option to keep the same API for end users. The cons are that it dirties the namespace and is very implicit/hacky.
I've been looking around on GitHub, and I've seen all three approaches in various projects. So I'm guessing there isn't really a consensus on what the best practice is.
I'm really interested in hearing from the pros, how they think about this, what they do, what they suggest... Or maybe there are subtleties that I'm missing... Any help is appreciated.
Thanks in advance
Option 3 is definitely how this should work, and here's why. You start out with your library having a package "foo" and a module "bar". Users make use of things inside "bar" like from foo.bar import x, y, z. Then one day "bar" starts getting really big: the implementations become more complex and get broken out, and features are added. The way you deal with this is by turning bar.py into a package, whose __init__.py imports the same names that bar.py used to expose. Your users see no change in API, and there's no need for them to learn exactly which submodule inside the new bar package they need to use (nor should there be, as things can keep changing many more times; it wouldn't be correct to expose the userbase to each of those changes when it's entirely unnecessary).

There's nothing hacky about this at all; it's how __init__.py is meant to be used. To those saying "explicit is better than implicit" I'd counter with "practicality beats purity" and "flat is better than nested", not to mention that a foolish consistency is the hobgoblin of little minds.
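A minimal sketch of option 3 (the names foo, bar, x, y, z follow the example above; the _impl submodule and the temp-dir scaffolding are invented just to make the sketch self-contained and runnable):

```python
import os
import sys
import tempfile

# Build the package layout on disk in a throwaway directory:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "foo", "bar"))
files = {
    os.path.join("foo", "__init__.py"): "",
    # bar used to be a single bar.py; it is now a package whose
    # __init__.py re-exports the same public names from a submodule:
    os.path.join("foo", "bar", "__init__.py"): "from foo.bar._impl import x, y, z\n",
    os.path.join("foo", "bar", "_impl.py"): "x = 1\ny = 2\nz = 3\n",
}
for rel, body in files.items():
    with open(os.path.join(root, rel), "w") as f:
        f.write(body)

sys.path.insert(0, root)
from foo.bar import x, y, z  # the user-facing import is unchanged
```

If _impl is later split again, only bar/__init__.py changes; every `from foo.bar import x` in user code keeps working.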
Flask-Admin is a pretty powerful administration solution, which, however, lacks several important features for managing models with many-to-many relationships (or at least clear documentation thereof). Here are some useful (albeit perhaps unreliable) hacks to get Flask-Admin SQLAlchemy models to dance the dance.
In my last blog post, I outlined a few interesting results from a word2vec model trained on half a million news documents. This was pleasantly met with some positive reactions, some of them due not to the scientific rigour of the report but to the awareness effect of such "populist treatment of the subject" on the community. On the other hand, there were more than a few negative reactions. Some believed I was "cherry-picking" and reporting only a handful of interesting results out of an ocean of mediocre performances. Others rejected my claim that training on a small dataset in any language can produce very encouraging results. And yet others literally threatened me so that I would release the code, despite my reiterating that the code is small and not the point.
This is a drug that acts very similarly to GLYX-13 and is being developed by the same company, Naurex. Unlike GLYX-13, though, it has not (yet) received “fast track” status from the FDA. It functions as an NMDA receptor partial agonist with a selective affinity for the glycine site. It differs from GLYX-13 in that it was created in oral form, is significantly more potent, and is being researched as a standalone antidepressant option (as opposed to an adjunct).
In early studies it has shown rapid antidepressant effects, and it is considered very well tolerated in preliminary trials. Like GLYX-13, this drug works quickly and is considered an improved version of ketamine: it acts similarly, but taking it won’t result in dissociative or hallucinogenic side effects, and it produces a quick, highly potent antidepressant response. As of May 2014 it was in Phase II(A) clinical trials for the treatment of major depression.
There's a lot of talk these days about how governments use all the data they can put their hands on to monitor every individual in the world. The capabilities offered by big data storage and analytic processing are immense when in the hands of professional, capable data scientists. Last week the National Security Agency was under the spotlight, a month ago it was the IRS (Internal Revenue Service) for a biased auditing selection algorithm, and maybe next month it will be the CDC for some other monitoring, privacy or profiling issue. Of course, private corporations are not exempt either. Indeed, they are sometimes accused of collusion with government agencies to share private data.
Is it really that bad? Maybe not. First, it should come as no surprise that intelligence agencies collect and process as much data as they can: that's their role, by definition. I don't know a single person who has been harassed or interrogated by mistake. Indeed, I don't even know a single person who has been contacted by the NSA. Even when I applied for NSA data science jobs, nobody ever talked to or emailed me back. They have better things to do than look through everyone's files.
For the average person, knowing that your online, mobile, and maybe driving, medical, and other aspects of your life are tracked can help. First, you can start paying for most transactions with cash rather than credit cards. You can go to the doctor anonymously (in my case, I have no doctor and no medical records; I just don't use official healthcare). You can use an alternate currency that leaves no trail: I'm working on this, an anonymous digital currency for bartering. On Facebook you can create fake profiles. And for email, use encryption technology: we are working on a new email web app (SaaS) that allows two individuals to exchange messages totally anonymously. Once the encrypted message has been decrypted (by the intended recipient), or 48 hours after it was encrypted, whichever comes first, it can never be decrypted again, and it is not stored anywhere. In short, if the government seizes the servers and database of this company, there is, by design, no way to reconstruct or decipher the messages from customers. More details on the technology later, but it's an example of data science used to protect people against their government.
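The "decrypt once, or expire after 48 hours" behavior described above can be sketched in a few lines (a toy sketch only: all names are invented, and the one-time-pad XOR stands in for real cryptography; the point is that the key is destroyed on first use or at expiry, so the message can never be recovered afterwards):

```python
import secrets
import time

class EphemeralStore:
    """Toy model of decrypt-once messaging: keys vanish after first use or 48 h."""
    TTL = 48 * 3600  # seconds

    def __init__(self):
        self._keys = {}  # message_id -> (one-time key, creation timestamp)

    def encrypt(self, plaintext):
        key = secrets.token_bytes(len(plaintext))  # one-time pad
        message_id = secrets.token_hex(8)
        self._keys[message_id] = (key, time.time())
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return message_id, ciphertext

    def decrypt(self, message_id, ciphertext):
        entry = self._keys.pop(message_id, None)  # key is destroyed on first use
        if entry is None:
            return None  # already read once, or never existed
        key, created = entry
        if time.time() - created > self.TTL:
            return None  # expired; the key is gone either way
        return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Since only the ephemeral key ever lives on the server, seizing the database after a message has been read (or has expired) yields nothing decryptable.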
Finally, here is an interesting test that you could do to check the government's real intentions. Create false security alarms, and see what happens. Example: you pretend that you want to collect a sample of each of the 100 or so elements in our universe (gold, helium, sodium, iron, uranium, polonium, plutonium, etc.). You start with all the elements with atomic number above 80, to make sure you can secure these ones first. Chances are very high that you are going to receive a visit from the NSA. The way the meeting goes, and how they treat you (e.g. "Sorry, but you need a permit to keep polonium at home" vs. throwing you in jail right away) will tell you whether they are mean and evil, or instead care about national security only.
Those who really think that the government is going too far could saturate these security agencies with bogus cases like the one described above: they would make the agencies spend all their time focusing on threats that are not real. But don't count on me to help with this: I think the privacy issues are grossly exaggerated.
On a different subject, the problem of leaks is caused by a few factors. People who recently leaked information do not appear to be any better than the agencies that they deceived. Also, due to severe restrictions in the hiring process (you need a clearance to work for these agencies), they don't necessarily get the best, most faithful employees.
The Inspector object can be used to get lists of schema objects from the database. Here, we pre-gather all named constraints and table names, and drop everything. This is better than using metadata.reflect(); metadata.drop_all(), as it handles cyclical constraints between tables.
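A sketch of that recipe, assuming SQLAlchemy (the drop_everything name and the demo table names are invented; the column-less ForeignKeyConstraint stubs exist only so the named constraints can be dropped before the tables, which is what breaks the cycles):

```python
from sqlalchemy import (create_engine, inspect, text,
                        MetaData, Table, ForeignKeyConstraint)
from sqlalchemy.schema import DropConstraint, DropTable

def drop_everything(engine):
    inspector = inspect(engine)
    metadata = MetaData()
    tables = []
    all_fks = []
    for table_name in inspector.get_table_names():
        fks = []
        for fk in inspector.get_foreign_keys(table_name):
            if fk["name"]:  # only named constraints can be dropped explicitly
                fks.append(ForeignKeyConstraint((), (), name=fk["name"]))
        tables.append(Table(table_name, metadata, *fks))
        all_fks.extend(fks)
    with engine.begin() as conn:
        for fk in all_fks:       # drop constraints first to break cycles
            conn.execute(DropConstraint(fk))
        for table in tables:
            conn.execute(DropTable(table))

# Demo against a throwaway SQLite database:
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE parent (id INTEGER PRIMARY KEY)"))
    conn.execute(text(
        "CREATE TABLE child (id INTEGER PRIMARY KEY, "
        "parent_id INTEGER REFERENCES parent(id))"))

drop_everything(engine)
remaining = inspect(engine).get_table_names()  # empty after the drop
```

On SQLite the constraint loop is a no-op (inline foreign keys are unnamed), but on PostgreSQL or MySQL it is what lets mutually referencing tables be dropped in any order.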
When you create a SQLite database in memory, it is only accessible to the particular thread that created it. Change to a file-backed database, e.g. create_engine('sqlite:////some/file/path/db.sqlite'), and your tables will exist.
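The underlying behavior is easy to see with the standard-library sqlite3 module alone: every connection to ':memory:' gets its own private, empty database, so a table created on one connection is invisible on another.

```python
import sqlite3

# First connection: create a table in its private in-memory database.
first = sqlite3.connect(":memory:")
first.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY)")

# Second connection to ':memory:' is a completely separate database:
second = sqlite3.connect(":memory:")
tables = [row[0] for row in second.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
# 'entries' is absent here: the second connection sees an empty database
```

This is why a pooled/threaded setup (like Flask plus SQLAlchemy) appears to "lose" in-memory tables: each worker checks out a different connection.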
As to why you are seeing the tables created twice: Flask in debug mode by default runs with a server that reloads every time you change your code. To do this, when it starts it spawns a new process that actually runs the server, so your init_db function is called once before you start the server, and then again when the server creates a child process to serve requests.
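If you want init_db to run only once in debug mode, a common guard is to check which process you are in (a sketch; it relies on the standard Werkzeug reloader setting WERKZEUG_RUN_MAIN='true' in the child process that actually serves requests, and the should_init_db name is invented):

```python
def should_init_db(environ, debug):
    # In the reloader's watcher (parent) process WERKZEUG_RUN_MAIN is unset,
    # so we skip initialization there; in the serving child it is 'true',
    # so initialization runs exactly once. Without debug there is no
    # reloader, so we always initialize.
    return (not debug) or environ.get("WERKZEUG_RUN_MAIN") == "true"

# Hypothetical call site:
#   if should_init_db(os.environ, app.debug):
#       init_db()
```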
This page tells you about the biological therapy crizotinib (pronounced cris-ot-tin-ib) and its possible side effects.
Crizotinib is a type of drug called a tyrosine kinase inhibitor (TKI). Tyrosine kinases are enzymes (proteins that act as chemical messengers). There are many different tyrosine kinases and they can stimulate cancer cells to grow.
Crizotinib is a treatment for some people with advanced lung cancer. It is also known by its brand name Xalkori (pronounced Zal-cor-ee).
Crizotinib blocks an enzyme called anaplastic lymphoma kinase (ALK). Some lung cancer cells have an overactive version of ALK. Blocking ALK with crizotinib can stop the cells growing, but it only works in cancers with the overactive enzyme. About 1 in 20 people (5%) with non-small cell lung cancer (NSCLC) have the overactive ALK enzyme.
Using SSH is slow on my devices, because the Android device's CPU doesn't seem to be powerful enough to encrypt data at rates above about 200 kbps. But it is secure enough to be used over WiFi and mobile networks, so it is quite useful to set up. There are two ways to do this: from your device, or from your computer.
SSH from your Android device to your Desktop
I strongly recommend setting this up. It's easy, and super useful.
Set up an SSH server on your Desktop. (You probably have this done already.)
Install ES File Explorer on your Android device.
Navigate to ES File Explorer -> Network -> SFTP and enter your login information.
For better security, generate an SSH key on your desktop, copy it over to your phone and use it. This way your precious UNIX password is not stored in clear text on the Android device. (Revoking a compromised SSH key is a lot easier than changing your password on all your machines.)
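The desktop side of that setup can look like this (a sketch assuming OpenSSH; the key file name android_key is arbitrary):

```shell
# Make sure the SSH directory exists with sane permissions
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"

# Generate a dedicated, passphrase-less key pair for the phone
ssh-keygen -t ed25519 -f "$HOME/.ssh/android_key" -N "" -C "android-phone"

# Authorize the public half for logins to this desktop
cat "$HOME/.ssh/android_key.pub" >> "$HOME/.ssh/authorized_keys"

# Now copy android_key (the PRIVATE half) to the phone, e.g. over USB,
# and point ES File Explorer's SFTP connection at it.
```

Using a dedicated key per device means you can remove just that one line from authorized_keys if the phone is ever lost.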
SSH from your Desktop to your Android
This is not as useful, but does have a certain "geek appeal". It's also very easy to do: Just go to the Google play store and install any one of many SSH Servers. Start it and follow the instructions.
It probably comes as no surprise, but we talk to a lot of data scientists at CrowdFlower. We like learning the tools they use, the programs that make their lives easier, and how everything works together. Today, we're really pleased to unveil the first of a three-part series about the data science ecosystem. Here it is in infographic form because, let's face it, everybody likes infographics:
from application import db
I found that the solution is overriding the get_query method. It should return a SQLAlchemy query object:

    def get_query(self):
        role = current_user.role
        if role == 'contributor':
            return  # filtered query
        elif role == 'admin':
            return  # unfiltered query
We do this in our app by overriding ModelView.
I looked through the source code a bit for Flask-Admin, and they've made the API easier to use since we last edited this code, because it looks like you can just do:
    from flask.ext.admin.contrib.sqla.view import ModelView, func

    class PaidOrderView(ModelView):
        def get_query(self):
            return self.session.query(self.model).filter(self.model.paid == True)

        def get_count_query(self):
            return self.session.query(func.count('*')).filter(self.model.paid == True)
(We were overriding get_list() which is not nearly as great.)
You can then use it like:
Let me know if that doesn't work for you and I can take another look.
A recent study using NASA’s CALIPSO satellite described how wind and weather carry millions of tons of dust from the Sahara desert to the Amazon basin each year – bringing much-needed fertilizers like phosphorus to the Amazon’s depleted soils.
To bring this story to life, NASA Goddard’s Scientific Visualization team produced a video showing the path of the Saharan dust, which has been viewed half a million times. This story is notable because it relies on satellite technology and data to show how one ecosystem’s health is deeply interconnected with another ecosystem on the other side of the world.