POLICY ENGINE Research: Design & Language
IRT Lab, Columbia University
POLICY ENGINE
Control Middleware
• Current implementation issue: different functions to communicate with different managers
• Objective: a modular, extensible design
Design: Extensibility
• Can we leverage the MIH (Media Independent Handover) design?
Leverage MIH Registration
• The Policy Engine (PE) is similar to the MIH Function
Events and Services
• Manager provides events and services
  – Events:
    • Link change
    • Network change
    • Location change
  – Services:
    • Subscriber can query the value of certain attributes (current location, current bandwidth, etc.)
    • Subscriber can send decisions/notifications
• Policy engine subscribes to events and services (put in tables)
Tables in Policy Engine
• Event Table
  – Events from each connected manager
  – Each event will trigger a different evaluation
• Service Table
  – Subscribed services from managers or other modules
  – Methods / API to access the service are also recorded
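The two tables above can be sketched as plain dictionaries. This is an illustrative Python sketch, not the actual implementation (the slides plan a C implementation); class and method names such as `PolicyEngine`, `subscribe_event`, and `register_service` are assumptions.

```python
# Minimal sketch of the Policy Engine's Event Table and Service Table.
# All names here are hypothetical, chosen to mirror the slide's description.

class PolicyEngine:
    def __init__(self):
        # event name -> id of the evaluation that the event triggers
        self.event_table = {}
        # service name -> callable used to query/act on a manager
        self.service_table = {}

    def subscribe_event(self, event, eval_id):
        self.event_table[event] = eval_id

    def register_service(self, service, accessor):
        self.service_table[service] = accessor

    def on_event(self, event):
        # each event triggers a different evaluation (or None if unknown)
        return self.event_table.get(event)

pe = PolicyEngine()
pe.subscribe_event("link_change", "eval1")
pe.register_service("bandwidth", lambda: 120)  # stub manager query
print(pe.on_event("link_change"))        # -> eval1
print(pe.service_table["bandwidth"]())   # -> 120
```

A real engine would populate both tables during MIH-style registration with each manager.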
POLICY LANGUAGE
Why?
• Environment
  – Mobile devices
  – Heterogeneous networks/interfaces
• Network selection and handover
  – Default rules
  – Manual switch
• Control middleware
  – Unified policy language
Use Cases
• Different network usage for different user scenarios
  – Different places (office / home)
  – Different times (morning / evening)
  – Different activities (working / playing)
  – Different devices (monitors / phones)
  – Different security options
• Application preferences
  – Some need high throughput
  – Some need the lowest monetary cost
Related Work
• Policy expression frameworks
  – Standard schemas
  – P3P / SAML / …
• Rule-based policies
  – Accountability in RDF (AIR)
• Custom configurations
  – For particular applications
Language Design
• Attributes
• Evaluation
• Decision
• Knowledge base
Language Semantics
• Attribute: network / system
• Term: value (binary evaluation) or range (T/F)
• Score: evaluate a score from one attribute
• Eval: trigger evaluation and generate facts (each event will trigger one eval, which may evaluate several terms and scores)
• Fact: predicate logic unit
• Rule: making decisions by forward chaining (facts => actions)
Attributes
• Deterministic
  – Location
  – Time
  – Scenario profiles (meeting / working / traveling)
  – Security
  – Devices
• Non-deterministic
  – Quality of Service (QoS)
  – System resources
  – Expense/cost
Attribute Representation
• Direct attributes
  – Corresponding entry in the service table
• Derived attributes
  – Combination of attributes
  – Evaluated as a single attribute
• Attribute table:
  – Name of the attribute
  – How to get the value of the attribute

Example:
derived attr {
  id: bp_ratio
  attr: bandwidth (x1)
  attr: price (x2)
  func: x1 / x2
}
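Evaluating a derived attribute like bp_ratio amounts to fetching each underlying attribute and applying the combining function. A minimal Python sketch, assuming illustrative accessor stubs (`attribute_table` and `eval_derived` are not names from the slides):

```python
# Sketch of evaluating the derived attribute bp_ratio = bandwidth / price.
# The lambdas stand in for real queries through the service table.

attribute_table = {
    "bandwidth": lambda: 100.0,  # hypothetical current bandwidth
    "price":     lambda: 20.0,   # hypothetical current price
}

derived = {
    "id": "bp_ratio",
    "attrs": ["bandwidth", "price"],   # x1, x2 in the slide's example
    "func": lambda x1, x2: x1 / x2,    # func: x1 / x2
}

def eval_derived(d):
    values = [attribute_table[a]() for a in d["attrs"]]
    return d["func"](*values)

print(eval_derived(derived))  # -> 5.0
```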
Deterministic Evaluation
• Binary (yes/no)
• Range (in range or not)
• Term
  – id
  – attr: evaluated attribute
  – values: expected values
  – low: lowest value
  – high: highest value

Example:
term {
  id: at_home
  attr: location
  values: home
}

term {
  id: mid_bw
  attr: bandwidth
  low: 50
  high: 200
}
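Both term forms reduce to a simple check: membership for binary terms, bounds for range terms. A hedged Python sketch of this semantics (the dict encoding and `eval_term` name are assumptions, mirroring the at_home and mid_bw examples above):

```python
# Sketch of deterministic term evaluation: binary (values) or range (low/high).

def eval_term(term, value):
    if "values" in term:
        # binary: does the attribute match one of the expected values?
        return value in term["values"]
    # range: is the attribute within [low, high]?
    return term["low"] <= value <= term["high"]

at_home = {"id": "at_home", "attr": "location", "values": ["home"]}
mid_bw  = {"id": "mid_bw", "attr": "bandwidth", "low": 50, "high": 200}

print(eval_term(at_home, "home"))  # -> True
print(eval_term(mid_bw, 120))      # -> True
print(eval_term(mid_bw, 300))      # -> False
```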
Non-deterministic Evaluation
• Attribute value -> score (0 ~ 100)
• Priority (used as weight when evaluating multiple scores)
• Score
  – id: name of the score
  – attr: evaluated attr(s)
  – how: max, min, order
  – priority: 0 ~ 1

Example:
score {
  id: bw_score
  attr: bandwidth
  order: linear
  max: 1000
  min: 0
  priority: 0.7
}

score {
  id: price_score
  attr: price
  order: linear
  max: 0
  min: 1000
  priority: 0.9
}

[Chart: example priorities on a 0–1 scale — bp_ratio: 0.9, signal: 0.7, latency: 0.2]
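One way to read the examples above: a linear order maps an attribute value onto 0–100 between min and max, and swapping min/max (as price_score does) inverts the scale so that cheaper networks score higher. A Python sketch under those assumptions (`linear_score` and `weighted` are illustrative names; the slides do not fix the exact formula):

```python
# Sketch of linear-order scoring and priority-weighted combination.
# bw_score: min 0, max 1000, priority 0.7 — higher bandwidth scores higher.
# price_score: min 1000, max 0, priority 0.9 — lower price scores higher.

def linear_score(value, lo, hi):
    # map value linearly onto 0..100; a reversed lo/hi inverts the scale
    frac = (value - lo) / (hi - lo)
    return max(0.0, min(100.0, 100.0 * frac))

bw_score    = {"min": 0, "max": 1000, "priority": 0.7}
price_score = {"min": 1000, "max": 0, "priority": 0.9}

def weighted(scores_and_values):
    # multiple scores: weighted sum, with priority as the weight
    total_w = sum(s["priority"] for s, _ in scores_and_values)
    return sum(s["priority"] * linear_score(v, s["min"], s["max"])
               for s, v in scores_and_values) / total_w

print(linear_score(500, 0, 1000))   # -> 50.0
print(linear_score(200, 1000, 0))   # -> 80.0 (cheap, so high score)
print(weighted([(bw_score, 500), (price_score, 200)]))  # -> 66.875
```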
Eval
• Event triggered
  – id: used by the Event Table
• Generate facts
  – attributes: directly turned (into facts)
  – terms: which satisfies
  – scores: highest score
    • Only one score
    • Multiple scores: weighted sum
• fact: predicate(params)

Examples:
eval {
  id: eval1
  { attr: location => location(x) }
  { term: mid_bw => mid_bw(x) }
  { score: bw_score => highest_bw(x) }
  { score: price_score => cheapest(x) }
  { score: bw_score score: price_score => most_worth(x) }
}
Decision
• Attribute evaluations -> facts
• Rule engine: facts -> actions
• Actions: decisions / facts
• Decisions: leverage the service table

Example:
rule {
  location(home)
  cheapest(x)
  =>
  decide(1, x)
  history(home, x)
}
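The rule above fires when both conditions hold, producing a decision and a new fact. A deliberately minimal Python sketch of one forward-chaining step for just this rule; a real engine (the slides mention leveraging Prolog) would generalize the matching:

```python
# Sketch of a single forward-chaining step for:
#   location(home), cheapest(x) => decide(1, x), history(home, x)
# Facts are encoded as (predicate, param...) tuples — an assumed encoding.

def fire_rule(facts):
    new = []
    if ("location", "home") in facts:          # condition: location(home)
        for pred, x in facts:
            if pred == "cheapest":             # condition: cheapest(x)
                new.append(("decide", 1, x))       # action: decide(1, x)
                new.append(("history", "home", x)) # new fact: history(home, x)
    return new

facts = [("location", "home"), ("cheapest", "wifi")]
print(fire_rule(facts))
# -> [('decide', 1, 'wifi'), ('history', 'home', 'wifi')]
```

The decide action would then be dispatched through the service table, while history(home, x) feeds the knowledge base.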
Knowledge Base
• Taking history into account
[Diagram: Data Store]
Implementation
• Policy Language
  – Syntax abstracted from the examples
  – LRM (Language Reference Manual) to be developed
• Implementation language: C
  – More efficient
  – Inside the system
  – Rule engine can leverage a logic language
    • Prolog
Conclusion
• Structures
  – attr, term, score, eval, fact, rule
• Advantages
  – Configurable & programmable
  – Configurations can be simple (XML/JSON)
  – Programmable rule chains can generate intelligent decisions
  – Extensible (attributes, evaluations, rules)
Next Steps
• Implementation
• Evaluation