Why not estimate on sub-tasks and roll that up for Velocity and Commitment?
Many teams break down stories into sub-tasks shortly before the sprint begins so they can use the sub-tasks for tracking progress during the sprint. This raises the possibility of using the sum of the sub-task estimates to decide which issues to commit to in the sprint (and potentially for velocity).
As described above, tracking is really a separate process from estimation and velocity. The estimates applied to the sub-tasks are clearly higher accuracy than those originally applied to the story. Using them for velocity would mix high- and low-accuracy estimates, making the velocity unusable for looking further out in the backlog, where stories have only low-accuracy estimates. In addition, only items near the top of the backlog are likely to have been broken into tasks, so using task estimates for velocity means the velocity value could only ever predict the time to complete the backlog up to the last story that has been broken into tasks.
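The story-level forecast described above can be sketched with hypothetical numbers. The point is that velocity only stays meaningful if every estimate feeding it, and every estimate remaining in the backlog, shares the same low accuracy; mixing task-level estimates into the average breaks the units.

```python
# Hypothetical numbers: a sketch of forecasting with story-level velocity.
completed_points = [21, 18, 24]         # story points finished in past sprints
velocity = sum(completed_points) / len(completed_points)

backlog_points = [8, 5, 13, 8, 21, 13]  # low-accuracy story estimates
sprints_needed = sum(backlog_points) / velocity

print(velocity)                  # 21.0
print(round(sprints_needed, 1))  # 3.2
```

If some backlog items carried high-accuracy task sums instead, the division above would silently compare incompatible units.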
When designing my database schema in SQLAlchemy, how often do I need to use "backrefs"? (Stack Overflow)
Excellent backref explanation
To understand backref, work through the following example:
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref
from sqlalchemy import Column, Integer, String
from sqlalchemy import Table, Text

engine = create_engine('mysql://root:ababab@localhost/alctest', echo=False)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    fullname = Column(String(100))
    password = Column(String(100))

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String(100), nullable=False)
    # foreign key; must define relationship
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship("User", backref=backref('addresses', order_by=id))
This is a very simple example. If you have an Address object, you can get its User directly through the user attribute declared by the relationship. In the reverse case, to get the addresses of a user without backref, you would have to run a query like

SELECT * FROM addresses WHERE user_id = $1

But if you use backref, SQLAlchemy will run this query for you when you access the attribute. In our example, accessing userobject.addresses will run the query

SELECT * FROM addresses WHERE user_id = userobject.id

There is no addresses attribute declared on the User model; it is added by backref.
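The behaviour described above can be shown as a small runnable sketch (assumptions: SQLAlchemy 1.4+ is installed, and an in-memory SQLite database stands in for MySQL so it runs anywhere; the models mirror the example, trimmed to the columns that matter).

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, backref, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String(100), nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))
    # backref makes SQLAlchemy add an 'addresses' attribute to User
    user = relationship("User", backref=backref('addresses', order_by=id))

engine = create_engine('sqlite://')   # in-memory database, MySQL stand-in
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

u = User(name='ed')
u.addresses = [Address(email_address='ed@home.example'),
               Address(email_address='ed@work.example')]
session.add(u)
session.commit()

a = session.query(Address).first()
print(a.user.name)       # Address -> User via the declared relationship
print(len(u.addresses))  # User -> Address via the backref-created attribute
```

Accessing u.addresses is what triggers the SELECT ... FROM addresses WHERE user_id = ? query lazily on your behalf.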
If I'm designing a schema, I always have a primary key column called "id" with an autoincrement sequence against it. Sometimes I'll have a domain table, with key varchar, description varchar.
Perhaps you deal with existing systems you didn't design. Aside from that, what is the case for using a multi-column primary key?
There are strong opinions on this, but I firmly believe that "always have an auto-incremented id" is a very wrong approach.
Read http://it.toolbox.com/blogs/database-soup/primary-keyvil-par... (an often-quoted series of blog posts on why surrogate numeric keys are evil).
In general, if you have an entity with a well-defined natural key (users and their logins, shipments and their tracking codes), why would you use an autogenerated key that's meaningless?
Case in point, my current schema, designed from scratch. Users can have multiple dashboards, and each dashboard has a label. A user cannot have multiple dashboards with the same label, but two users can of course use the same label for their dashboards (like "sales" or "mydashboard"). The dashboard table's primary key is (username, label): that pair uniquely identifies a dashboard, so it's a perfect PK candidate.
Also, as already noted in one response, many-to-many relationships are typically modelled by an intermediate table that has foreign keys to both sides of the many-to-many, and its primary key is the combination of those foreign keys.
EDIT: added a concrete example
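Both composite-key cases above can be sketched with the standard library's sqlite3 module (the table and column names here are made up for illustration): a natural (username, label) primary key, and a many-to-many link table whose primary key is the pair of foreign keys.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE dashboards (
    username TEXT NOT NULL,
    label    TEXT NOT NULL,
    PRIMARY KEY (username, label)      -- natural composite key
);
CREATE TABLE users (login TEXT PRIMARY KEY);
CREATE TABLE roles (name  TEXT PRIMARY KEY);
CREATE TABLE user_roles (              -- many-to-many link table
    login TEXT NOT NULL REFERENCES users(login),
    role  TEXT NOT NULL REFERENCES roles(name),
    PRIMARY KEY (login, role)          -- the pair of foreign keys
);
""")

con.execute("INSERT INTO dashboards VALUES ('alice', 'sales')")
con.execute("INSERT INTO dashboards VALUES ('bob', 'sales')")  # different user: fine
try:
    con.execute("INSERT INTO dashboards VALUES ('alice', 'sales')")
except sqlite3.IntegrityError:
    print("duplicate (username, label) rejected")
```

The database itself enforces the "one label per user" rule; no surrogate id column is needed to identify a dashboard.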
I follow a few rules:

- Primary keys should be as small as necessary. Prefer a numeric type, because numeric types are stored in a much more compact format than character formats. This matters because most primary keys will also be foreign keys in other tables, as well as used in multiple indexes. The smaller your key, the smaller the index, and the fewer cache pages you will use.
- Primary keys should never change. Updating a primary key should always be out of the question, because it is most likely used in multiple indexes and as a foreign key. Updating a single primary key could cause a ripple effect of changes.
- Do NOT use "your problem domain's primary key" as your logical model's primary key. For example, passport number, social security number, or employee contract number: these "primary keys" can change in real-world situations.
On surrogate vs. natural keys, I refer to the rules above. If the natural key is small and will never change, it can be used as a primary key. If the natural key is large or likely to change, I use a surrogate key. If there is no natural key, I still make a surrogate key, because experience shows you will always add tables to your schema and wish you'd put a primary key in place.
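The "large or likely to change" case can be sketched with stdlib sqlite3 (hypothetical columns): keep the real-world identifier unique, but give the table a small, immutable integer key, since real-world identifiers can be reissued.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""
CREATE TABLE employees (
    id       INTEGER PRIMARY KEY,   -- small, never-changing surrogate key
    passport TEXT UNIQUE NOT NULL   -- natural identifier: keep it unique,
                                    -- but not the PK, since passports
                                    -- can be reissued
)""")
con.execute("INSERT INTO employees (passport) VALUES ('P1234567')")
# The passport can change without touching any foreign keys that point at id:
con.execute("UPDATE employees SET passport = 'P7654321' WHERE id = 1")
print(con.execute("SELECT passport FROM employees WHERE id = 1").fetchone()[0])
```

Any table referencing employees(id) is untouched by the update; with the passport as the PK, that change would ripple through every referencing row.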
Thankfully, there is an obscure feature hidden in Firefox to make it highlight only the word on a double-click and ignore the following whitespace. Open